public inbox for gentoo-commits@lists.gentoo.org
* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-03-30 11:15 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-03-30 11:15 UTC (permalink / raw)
  To: gentoo-commits

commit:     0aaa0274386354bea7c325f5f8d9c25643c667b7
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 30 11:14:19 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 30 11:14:19 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0aaa0274

tmp513 requires REGMAP_I2C to build.

Select it by default in Kconfig. See bug #710790.
Thanks to Phil Stracchino

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  4 +++
 ...3-Fix-build-issue-by-selecting-CONFIG_REG.patch | 30 ++++++++++++++++++++++
 2 files changed, 34 insertions(+)

diff --git a/0000_README b/0000_README
index 7c240ef..4bc51da 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  2600_enable-key-swapping-for-apple-mac.patch
 From:   https://github.com/free5lot/hid-apple-patched
 Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902
 
+Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
+From:   https://bugs.gentoo.org/710790
+Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
new file mode 100644
index 0000000..4335685
--- /dev/null
+++ b/2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
@@ -0,0 +1,30 @@
+From dc328d75a6f37f4ff11a81ae16b1ec88c3197640 Mon Sep 17 00:00:00 2001
+From: Mike Pagano <mpagano@gentoo.org>
+Date: Mon, 23 Mar 2020 08:20:06 -0400
+Subject: [PATCH 1/1] This driver requires REGMAP_I2C to build.  Select it by
+ default in Kconfig. Reported at gentoo bugzilla:
+ https://bugs.gentoo.org/710790
+Cc: mpagano@gentoo.org
+
+Reported-by: Phil Stracchino <phils@caerllewys.net>
+
+Signed-off-by: Mike Pagano <mpagano@gentoo.org>
+---
+ drivers/hwmon/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..530b4f29ba85 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -1769,6 +1769,7 @@ config SENSORS_TMP421
+ config SENSORS_TMP513
+ 	tristate "Texas Instruments TMP513 and compatibles"
+ 	depends on I2C
++	select REGMAP_I2C
+ 	help
+ 	  If you say yes here you get support for Texas Instruments TMP512,
+ 	  and TMP513 temperature and power supply sensor chips.
+-- 
+2.24.1
+

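For reference, the resulting Kconfig entry after the hunk above is applied reads as follows (reconstructed directly from the patch context; `select REGMAP_I2C` pulls in the regmap I2C backend the driver links against, whereas `depends on I2C` alone would not):

```kconfig
config SENSORS_TMP513
	tristate "Texas Instruments TMP513 and compatibles"
	depends on I2C
	select REGMAP_I2C
	help
	  If you say yes here you get support for Texas Instruments TMP512,
	  and TMP513 temperature and power supply sensor chips.
```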


* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-03-30 11:33 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-03-30 11:33 UTC (permalink / raw)
  To: gentoo-commits

commit:     e2008c5b7f70e3fa73c2911aceab812431d7b058
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 30 11:32:57 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 30 11:32:57 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e2008c5b

Remove incompatible patch

2400_iwlwifi-PHY_SKU-NVM-3168-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                             |  4 ----
 2400_iwlwifi-PHY_SKU-NVM-3168-fix.patch | 14 --------------
 2 files changed, 18 deletions(-)

diff --git a/0000_README b/0000_README
index 4bc51da..fd3ee5e 100644
--- a/0000_README
+++ b/0000_README
@@ -55,10 +55,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2400_iwlwifi-PHY_SKU-NVM-3168-fix.patch
-From:   https://patchwork.kernel.org/patch/11353871/ 
-Desc:   iwlwifi: mvm: Do not require PHY_SKU NVM section for 3168 devices
-
 Patch:  2600_enable-key-swapping-for-apple-mac.patch
 From:   https://github.com/free5lot/hid-apple-patched
 Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902

diff --git a/2400_iwlwifi-PHY_SKU-NVM-3168-fix.patch b/2400_iwlwifi-PHY_SKU-NVM-3168-fix.patch
deleted file mode 100644
index d736a9e..0000000
--- a/2400_iwlwifi-PHY_SKU-NVM-3168-fix.patch
+++ /dev/null
@@ -1,14 +0,0 @@
-diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
-index 46128a2a9c6e..e98ce380c7b9 100644
---- a/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
-+++ b/drivers/net/wireless/intel/iwlwifi/mvm/nvm.c
-@@ -308,7 +308,8 @@ iwl_parse_nvm_sections(struct iwl_mvm *mvm)
- 		}
- 
- 		/* PHY_SKU section is mandatory in B0 */
--		if (!mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) {
-+		if (mvm->trans->cfg->nvm_type == IWL_NVM_EXT &&
-+		    !mvm->nvm_sections[NVM_SECTION_TYPE_PHY_SKU].data) {
- 			IWL_ERR(mvm,
- 				"Can't parse phy_sku in B0, empty sections\n");
- 			return NULL;



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-03-30 12:31 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-03-30 12:31 UTC (permalink / raw)
  To: gentoo-commits

commit:     9fc909669416a6c8a9300e97b0957c016b187378
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Mar 30 12:30:32 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Mar 30 12:30:32 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9fc90966

mac80211: fix authentication with iwlwifi/mvm

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 +++
 2400_mac80211-iwlwifi-authentication-fix.patch | 34 ++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/0000_README b/0000_README
index fd3ee5e..5080b3d 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
+Patch:  2400_mac80211-iwlwifi-authentication-fix.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/patch/?id=be8c827f50a0bcd56361b31ada11dc0a3c2fd240
+Desc:   mac80211: fix authentication with iwlwifi/mvm
+
 Patch:  2600_enable-key-swapping-for-apple-mac.patch
 From:   https://github.com/free5lot/hid-apple-patched
 Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional keys on some Apple keyboards. See bug #622902

diff --git a/2400_mac80211-iwlwifi-authentication-fix.patch b/2400_mac80211-iwlwifi-authentication-fix.patch
new file mode 100644
index 0000000..87f14d3
--- /dev/null
+++ b/2400_mac80211-iwlwifi-authentication-fix.patch
@@ -0,0 +1,34 @@
+From be8c827f50a0bcd56361b31ada11dc0a3c2fd240 Mon Sep 17 00:00:00 2001
+From: Johannes Berg <johannes.berg@intel.com>
+Date: Sun, 29 Mar 2020 22:50:06 +0200
+Subject: mac80211: fix authentication with iwlwifi/mvm
+
+The original patch didn't copy the ieee80211_is_data() condition
+because on most drivers the management frames don't go through
+this path. However, they do on iwlwifi/mvm, so we do need to keep
+the condition here.
+
+Cc: stable@vger.kernel.org
+Fixes: ce2e1ca70307 ("mac80211: Check port authorization in the ieee80211_tx_dequeue() case")
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+---
+ net/mac80211/tx.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index d9cca6dbd870..efe4c1fc68e5 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3610,7 +3610,8 @@ begin:
+ 		 * Drop unicast frames to unauthorised stations unless they are
+ 		 * EAPOL frames from the local station.
+ 		 */
+-		if (unlikely(!ieee80211_vif_is_mesh(&tx.sdata->vif) &&
++		if (unlikely(ieee80211_is_data(hdr->frame_control) &&
++			     !ieee80211_vif_is_mesh(&tx.sdata->vif) &&
+ 			     tx.sdata->vif.type != NL80211_IFTYPE_OCB &&
+ 			     !is_multicast_ether_addr(hdr->addr1) &&
+ 			     !test_sta_flag(tx.sta, WLAN_STA_AUTHORIZED) &&
+-- 
+cgit 1.2-0.3.lf.el7



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-01 12:06 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-01 12:06 UTC (permalink / raw)
  To: gentoo-commits

commit:     576d2c6121c73c74c140804fb35f5aff0cf01dd0
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  1 12:06:13 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  1 12:06:13 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=576d2c61

Linux patch 5.6.1

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |   4 +
 1001_linux-5.6.1.patch | 787 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 791 insertions(+)

diff --git a/0000_README b/0000_README
index 5080b3d..e9a8c70 100644
--- a/0000_README
+++ b/0000_README
@@ -43,6 +43,10 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
+Patch:  1001_linux-5.6.1.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.1
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1001_linux-5.6.1.patch b/1001_linux-5.6.1.patch
new file mode 100644
index 0000000..cbde007
--- /dev/null
+++ b/1001_linux-5.6.1.patch
@@ -0,0 +1,787 @@
+diff --git a/Makefile b/Makefile
+index 4d0711f54047..75d17e7f799b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 0
++SUBLEVEL = 1
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/ata/ahci.c b/drivers/ata/ahci.c
+index 11ea1aff40db..8c6f8c83dd6f 100644
+--- a/drivers/ata/ahci.c
++++ b/drivers/ata/ahci.c
+@@ -401,6 +401,7 @@ static const struct pci_device_id ahci_pci_tbl[] = {
+ 	{ PCI_VDEVICE(INTEL, 0xa252), board_ahci }, /* Lewisburg RAID*/
+ 	{ PCI_VDEVICE(INTEL, 0xa256), board_ahci }, /* Lewisburg RAID*/
+ 	{ PCI_VDEVICE(INTEL, 0xa356), board_ahci }, /* Cannon Lake PCH-H RAID */
++	{ PCI_VDEVICE(INTEL, 0x06d7), board_ahci }, /* Comet Lake-H RAID */
+ 	{ PCI_VDEVICE(INTEL, 0x0f22), board_ahci_mobile }, /* Bay Trail AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x0f23), board_ahci_mobile }, /* Bay Trail AHCI */
+ 	{ PCI_VDEVICE(INTEL, 0x22a3), board_ahci_mobile }, /* Cherry Tr. AHCI */
+diff --git a/drivers/media/usb/b2c2/flexcop-usb.c b/drivers/media/usb/b2c2/flexcop-usb.c
+index 039963a7765b..198ddfb8d2b1 100644
+--- a/drivers/media/usb/b2c2/flexcop-usb.c
++++ b/drivers/media/usb/b2c2/flexcop-usb.c
+@@ -511,6 +511,9 @@ static int flexcop_usb_init(struct flexcop_usb *fc_usb)
+ 		return ret;
+ 	}
+ 
++	if (fc_usb->uintf->cur_altsetting->desc.bNumEndpoints < 1)
++		return -ENODEV;
++
+ 	switch (fc_usb->udev->speed) {
+ 	case USB_SPEED_LOW:
+ 		err("cannot handle USB speed because it is too slow.");
+@@ -544,9 +547,6 @@ static int flexcop_usb_probe(struct usb_interface *intf,
+ 	struct flexcop_device *fc = NULL;
+ 	int ret;
+ 
+-	if (intf->cur_altsetting->desc.bNumEndpoints < 1)
+-		return -ENODEV;
+-
+ 	if ((fc = flexcop_device_kmalloc(sizeof(struct flexcop_usb))) == NULL) {
+ 		err("out of memory\n");
+ 		return -ENOMEM;
+diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
+index e53c58ab6488..ef62dd6c5ae4 100644
+--- a/drivers/media/usb/dvb-usb/dib0700_core.c
++++ b/drivers/media/usb/dvb-usb/dib0700_core.c
+@@ -818,7 +818,7 @@ int dib0700_rc_setup(struct dvb_usb_device *d, struct usb_interface *intf)
+ 
+ 	/* Starting in firmware 1.20, the RC info is provided on a bulk pipe */
+ 
+-	if (intf->altsetting[0].desc.bNumEndpoints < rc_ep + 1)
++	if (intf->cur_altsetting->desc.bNumEndpoints < rc_ep + 1)
+ 		return -ENODEV;
+ 
+ 	purb = usb_alloc_urb(0, GFP_KERNEL);
+@@ -838,7 +838,7 @@ int dib0700_rc_setup(struct dvb_usb_device *d, struct usb_interface *intf)
+ 	 * Some devices like the Hauppauge NovaTD model 52009 use an interrupt
+ 	 * endpoint, while others use a bulk one.
+ 	 */
+-	e = &intf->altsetting[0].endpoint[rc_ep].desc;
++	e = &intf->cur_altsetting->endpoint[rc_ep].desc;
+ 	if (usb_endpoint_dir_in(e)) {
+ 		if (usb_endpoint_xfer_bulk(e)) {
+ 			pipe = usb_rcvbulkpipe(d->udev, rc_ep);
+diff --git a/drivers/media/usb/gspca/ov519.c b/drivers/media/usb/gspca/ov519.c
+index f417dfc0b872..0afe70a3f9a2 100644
+--- a/drivers/media/usb/gspca/ov519.c
++++ b/drivers/media/usb/gspca/ov519.c
+@@ -3477,6 +3477,11 @@ static void ov511_mode_init_regs(struct sd *sd)
+ 		return;
+ 	}
+ 
++	if (alt->desc.bNumEndpoints < 1) {
++		sd->gspca_dev.usb_err = -ENODEV;
++		return;
++	}
++
+ 	packet_size = le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ 	reg_w(sd, R51x_FIFO_PSIZE, packet_size >> 5);
+ 
+@@ -3603,6 +3608,11 @@ static void ov518_mode_init_regs(struct sd *sd)
+ 		return;
+ 	}
+ 
++	if (alt->desc.bNumEndpoints < 1) {
++		sd->gspca_dev.usb_err = -ENODEV;
++		return;
++	}
++
+ 	packet_size = le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ 	ov518_reg_w32(sd, R51x_FIFO_PSIZE, packet_size & ~7, 2);
+ 
+diff --git a/drivers/media/usb/gspca/stv06xx/stv06xx.c b/drivers/media/usb/gspca/stv06xx/stv06xx.c
+index 79653d409951..95673fc0a99c 100644
+--- a/drivers/media/usb/gspca/stv06xx/stv06xx.c
++++ b/drivers/media/usb/gspca/stv06xx/stv06xx.c
+@@ -282,6 +282,9 @@ static int stv06xx_start(struct gspca_dev *gspca_dev)
+ 		return -EIO;
+ 	}
+ 
++	if (alt->desc.bNumEndpoints < 1)
++		return -ENODEV;
++
+ 	packet_size = le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ 	err = stv06xx_write_bridge(sd, STV_ISO_SIZE_L, packet_size);
+ 	if (err < 0)
+@@ -306,11 +309,21 @@ out:
+ 
+ static int stv06xx_isoc_init(struct gspca_dev *gspca_dev)
+ {
++	struct usb_interface_cache *intfc;
+ 	struct usb_host_interface *alt;
+ 	struct sd *sd = (struct sd *) gspca_dev;
+ 
++	intfc = gspca_dev->dev->actconfig->intf_cache[0];
++
++	if (intfc->num_altsetting < 2)
++		return -ENODEV;
++
++	alt = &intfc->altsetting[1];
++
++	if (alt->desc.bNumEndpoints < 1)
++		return -ENODEV;
++
+ 	/* Start isoc bandwidth "negotiation" at max isoc bandwidth */
+-	alt = &gspca_dev->dev->actconfig->intf_cache[0]->altsetting[1];
+ 	alt->endpoint[0].desc.wMaxPacketSize =
+ 		cpu_to_le16(sd->sensor->max_packet_size[gspca_dev->curr_mode]);
+ 
+@@ -323,6 +336,10 @@ static int stv06xx_isoc_nego(struct gspca_dev *gspca_dev)
+ 	struct usb_host_interface *alt;
+ 	struct sd *sd = (struct sd *) gspca_dev;
+ 
++	/*
++	 * Existence of altsetting and endpoint was verified in
++	 * stv06xx_isoc_init()
++	 */
+ 	alt = &gspca_dev->dev->actconfig->intf_cache[0]->altsetting[1];
+ 	packet_size = le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ 	min_packet_size = sd->sensor->min_packet_size[gspca_dev->curr_mode];
+diff --git a/drivers/media/usb/gspca/stv06xx/stv06xx_pb0100.c b/drivers/media/usb/gspca/stv06xx/stv06xx_pb0100.c
+index 6d1007715ff7..ae382b3b5f7f 100644
+--- a/drivers/media/usb/gspca/stv06xx/stv06xx_pb0100.c
++++ b/drivers/media/usb/gspca/stv06xx/stv06xx_pb0100.c
+@@ -185,6 +185,10 @@ static int pb0100_start(struct sd *sd)
+ 	alt = usb_altnum_to_altsetting(intf, sd->gspca_dev.alt);
+ 	if (!alt)
+ 		return -ENODEV;
++
++	if (alt->desc.bNumEndpoints < 1)
++		return -ENODEV;
++
+ 	packet_size = le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ 
+ 	/* If we don't have enough bandwidth use a lower framerate */
+diff --git a/drivers/media/usb/gspca/xirlink_cit.c b/drivers/media/usb/gspca/xirlink_cit.c
+index 934a90bd78c2..c579b100f066 100644
+--- a/drivers/media/usb/gspca/xirlink_cit.c
++++ b/drivers/media/usb/gspca/xirlink_cit.c
+@@ -1442,6 +1442,9 @@ static int cit_get_packet_size(struct gspca_dev *gspca_dev)
+ 		return -EIO;
+ 	}
+ 
++	if (alt->desc.bNumEndpoints < 1)
++		return -ENODEV;
++
+ 	return le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ }
+ 
+@@ -2626,6 +2629,7 @@ static int sd_start(struct gspca_dev *gspca_dev)
+ 
+ static int sd_isoc_init(struct gspca_dev *gspca_dev)
+ {
++	struct usb_interface_cache *intfc;
+ 	struct usb_host_interface *alt;
+ 	int max_packet_size;
+ 
+@@ -2641,8 +2645,17 @@ static int sd_isoc_init(struct gspca_dev *gspca_dev)
+ 		break;
+ 	}
+ 
++	intfc = gspca_dev->dev->actconfig->intf_cache[0];
++
++	if (intfc->num_altsetting < 2)
++		return -ENODEV;
++
++	alt = &intfc->altsetting[1];
++
++	if (alt->desc.bNumEndpoints < 1)
++		return -ENODEV;
++
+ 	/* Start isoc bandwidth "negotiation" at max isoc bandwidth */
+-	alt = &gspca_dev->dev->actconfig->intf_cache[0]->altsetting[1];
+ 	alt->endpoint[0].desc.wMaxPacketSize = cpu_to_le16(max_packet_size);
+ 
+ 	return 0;
+@@ -2665,6 +2678,9 @@ static int sd_isoc_nego(struct gspca_dev *gspca_dev)
+ 		break;
+ 	}
+ 
++	/*
++	 * Existence of altsetting and endpoint was verified in sd_isoc_init()
++	 */
+ 	alt = &gspca_dev->dev->actconfig->intf_cache[0]->altsetting[1];
+ 	packet_size = le16_to_cpu(alt->endpoint[0].desc.wMaxPacketSize);
+ 	if (packet_size <= min_packet_size)
+diff --git a/drivers/media/usb/usbtv/usbtv-core.c b/drivers/media/usb/usbtv/usbtv-core.c
+index 5095c380b2c1..ee9c656d121f 100644
+--- a/drivers/media/usb/usbtv/usbtv-core.c
++++ b/drivers/media/usb/usbtv/usbtv-core.c
+@@ -56,7 +56,7 @@ int usbtv_set_regs(struct usbtv *usbtv, const u16 regs[][2], int size)
+ 
+ 		ret = usb_control_msg(usbtv->udev, pipe, USBTV_REQUEST_REG,
+ 			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			value, index, NULL, 0, 0);
++			value, index, NULL, 0, USB_CTRL_GET_TIMEOUT);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+diff --git a/drivers/media/usb/usbtv/usbtv-video.c b/drivers/media/usb/usbtv/usbtv-video.c
+index 3d9284a09ee5..b249f037900c 100644
+--- a/drivers/media/usb/usbtv/usbtv-video.c
++++ b/drivers/media/usb/usbtv/usbtv-video.c
+@@ -800,7 +800,8 @@ static int usbtv_s_ctrl(struct v4l2_ctrl *ctrl)
+ 		ret = usb_control_msg(usbtv->udev,
+ 			usb_rcvctrlpipe(usbtv->udev, 0), USBTV_CONTROL_REG,
+ 			USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			0, USBTV_BASE + 0x0244, (void *)data, 3, 0);
++			0, USBTV_BASE + 0x0244, (void *)data, 3,
++			USB_CTRL_GET_TIMEOUT);
+ 		if (ret < 0)
+ 			goto error;
+ 	}
+@@ -851,7 +852,7 @@ static int usbtv_s_ctrl(struct v4l2_ctrl *ctrl)
+ 	ret = usb_control_msg(usbtv->udev, usb_sndctrlpipe(usbtv->udev, 0),
+ 			USBTV_CONTROL_REG,
+ 			USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+-			0, index, (void *)data, size, 0);
++			0, index, (void *)data, size, USB_CTRL_SET_TIMEOUT);
+ 
+ error:
+ 	if (ret < 0)
+diff --git a/drivers/media/v4l2-core/v4l2-device.c b/drivers/media/v4l2-core/v4l2-device.c
+index 63d6b147b21e..41da73ce2e98 100644
+--- a/drivers/media/v4l2-core/v4l2-device.c
++++ b/drivers/media/v4l2-core/v4l2-device.c
+@@ -179,6 +179,7 @@ static void v4l2_subdev_release(struct v4l2_subdev *sd)
+ 
+ 	if (sd->internal_ops && sd->internal_ops->release)
+ 		sd->internal_ops->release(sd);
++	sd->devnode = NULL;
+ 	module_put(owner);
+ }
+ 
+diff --git a/drivers/staging/kpc2000/kpc2000/core.c b/drivers/staging/kpc2000/kpc2000/core.c
+index 93cf28febdf6..7b00d7069e21 100644
+--- a/drivers/staging/kpc2000/kpc2000/core.c
++++ b/drivers/staging/kpc2000/kpc2000/core.c
+@@ -110,10 +110,10 @@ static ssize_t cpld_reconfigure(struct device *dev,
+ 				const char *buf, size_t count)
+ {
+ 	struct kp2000_device *pcard = dev_get_drvdata(dev);
+-	long wr_val;
++	unsigned long wr_val;
+ 	int rv;
+ 
+-	rv = kstrtol(buf, 0, &wr_val);
++	rv = kstrtoul(buf, 0, &wr_val);
+ 	if (rv < 0)
+ 		return rv;
+ 	if (wr_val > 7)
+diff --git a/drivers/staging/rtl8188eu/os_dep/usb_intf.c b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+index 845c8817281c..f7f09c0d273f 100644
+--- a/drivers/staging/rtl8188eu/os_dep/usb_intf.c
++++ b/drivers/staging/rtl8188eu/os_dep/usb_intf.c
+@@ -32,6 +32,7 @@ static const struct usb_device_id rtw_usb_id_tbl[] = {
+ 	/****** 8188EUS ********/
+ 	{USB_DEVICE(0x056e, 0x4008)}, /* Elecom WDC-150SU2M */
+ 	{USB_DEVICE(0x07b8, 0x8179)}, /* Abocom - Abocom */
++	{USB_DEVICE(0x0B05, 0x18F0)}, /* ASUS USB-N10 Nano B1 */
+ 	{USB_DEVICE(0x2001, 0x330F)}, /* DLink DWA-125 REV D1 */
+ 	{USB_DEVICE(0x2001, 0x3310)}, /* Dlink DWA-123 REV D1 */
+ 	{USB_DEVICE(0x2001, 0x3311)}, /* DLink GO-USB-N150 REV B1 */
+diff --git a/drivers/staging/wfx/Documentation/devicetree/bindings/net/wireless/siliabs,wfx.txt b/drivers/staging/wfx/Documentation/devicetree/bindings/net/wireless/siliabs,wfx.txt
+index 081d58abd5ac..fca6357e1d45 100644
+--- a/drivers/staging/wfx/Documentation/devicetree/bindings/net/wireless/siliabs,wfx.txt
++++ b/drivers/staging/wfx/Documentation/devicetree/bindings/net/wireless/siliabs,wfx.txt
+@@ -6,7 +6,7 @@ SPI
+ You have to declare the WFxxx chip in your device tree.
+ 
+ Required properties:
+- - compatible: Should be "silabs,wfx-spi"
++ - compatible: Should be "silabs,wf200"
+  - reg: Chip select address of device
+  - spi-max-frequency: Maximum SPI clocking speed of device in Hz
+  - interrupts-extended: Should contain interrupt line (interrupt-parent +
+@@ -15,6 +15,7 @@ Required properties:
+ Optional properties:
+  - reset-gpios: phandle of gpio that will be used to reset chip during probe.
+    Without this property, you may encounter issues with warm boot.
++   (Legacy: when compatible == "silabs,wfx-spi", the gpio is inverted.)
+ 
+ Please consult Documentation/devicetree/bindings/spi/spi-bus.txt for optional
+ SPI connection related properties,
+@@ -23,12 +24,12 @@ Example:
+ 
+ &spi1 {
+ 	wfx {
+-		compatible = "silabs,wfx-spi";
++		compatible = "silabs,wf200";
+ 		pinctrl-names = "default";
+ 		pinctrl-0 = <&wfx_irq &wfx_gpios>;
+ 		interrupts-extended = <&gpio 16 IRQ_TYPE_EDGE_RISING>;
+ 		wakeup-gpios = <&gpio 12 GPIO_ACTIVE_HIGH>;
+-		reset-gpios = <&gpio 13 GPIO_ACTIVE_HIGH>;
++		reset-gpios = <&gpio 13 GPIO_ACTIVE_LOW>;
+ 		reg = <0>;
+ 		spi-max-frequency = <42000000>;
+ 	};
+diff --git a/drivers/staging/wfx/bus_sdio.c b/drivers/staging/wfx/bus_sdio.c
+index f8901164c206..5450bd5e1b5d 100644
+--- a/drivers/staging/wfx/bus_sdio.c
++++ b/drivers/staging/wfx/bus_sdio.c
+@@ -200,25 +200,23 @@ static int wfx_sdio_probe(struct sdio_func *func,
+ 	if (ret)
+ 		goto err0;
+ 
+-	ret = wfx_sdio_irq_subscribe(bus);
+-	if (ret)
+-		goto err1;
+-
+ 	bus->core = wfx_init_common(&func->dev, &wfx_sdio_pdata,
+ 				    &wfx_sdio_hwbus_ops, bus);
+ 	if (!bus->core) {
+ 		ret = -EIO;
+-		goto err2;
++		goto err1;
+ 	}
+ 
++	ret = wfx_sdio_irq_subscribe(bus);
++	if (ret)
++		goto err1;
++
+ 	ret = wfx_probe(bus->core);
+ 	if (ret)
+-		goto err3;
++		goto err2;
+ 
+ 	return 0;
+ 
+-err3:
+-	wfx_free_common(bus->core);
+ err2:
+ 	wfx_sdio_irq_unsubscribe(bus);
+ err1:
+@@ -234,7 +232,6 @@ static void wfx_sdio_remove(struct sdio_func *func)
+ 	struct wfx_sdio_priv *bus = sdio_get_drvdata(func);
+ 
+ 	wfx_release(bus->core);
+-	wfx_free_common(bus->core);
+ 	wfx_sdio_irq_unsubscribe(bus);
+ 	sdio_claim_host(func);
+ 	sdio_disable_func(func);
+diff --git a/drivers/staging/wfx/bus_spi.c b/drivers/staging/wfx/bus_spi.c
+index 40bc33035de2..d6a75bd61595 100644
+--- a/drivers/staging/wfx/bus_spi.c
++++ b/drivers/staging/wfx/bus_spi.c
+@@ -27,6 +27,8 @@ MODULE_PARM_DESC(gpio_reset, "gpio number for reset. -1 for none.");
+ #define SET_WRITE 0x7FFF        /* usage: and operation */
+ #define SET_READ 0x8000         /* usage: or operation */
+ 
++#define WFX_RESET_INVERTED 1
++
+ static const struct wfx_platform_data wfx_spi_pdata = {
+ 	.file_fw = "wfm_wf200",
+ 	.file_pds = "wf200.pds",
+@@ -154,6 +156,11 @@ static void wfx_spi_request_rx(struct work_struct *work)
+ 	wfx_bh_request_rx(bus->core);
+ }
+ 
++static void wfx_flush_irq_work(void *w)
++{
++	flush_work(w);
++}
++
+ static size_t wfx_spi_align_size(void *priv, size_t size)
+ {
+ 	// Most of SPI controllers avoid DMA if buffer size is not 32bit aligned
+@@ -201,28 +208,31 @@ static int wfx_spi_probe(struct spi_device *func)
+ 	if (!bus->gpio_reset) {
+ 		dev_warn(&func->dev, "try to load firmware anyway\n");
+ 	} else {
+-		gpiod_set_value(bus->gpio_reset, 0);
+-		udelay(100);
++		if (spi_get_device_id(func)->driver_data & WFX_RESET_INVERTED)
++			gpiod_toggle_active_low(bus->gpio_reset);
+ 		gpiod_set_value(bus->gpio_reset, 1);
++		udelay(100);
++		gpiod_set_value(bus->gpio_reset, 0);
+ 		udelay(2000);
+ 	}
+ 
+-	ret = devm_request_irq(&func->dev, func->irq, wfx_spi_irq_handler,
+-			       IRQF_TRIGGER_RISING, "wfx", bus);
+-	if (ret)
+-		return ret;
+-
+ 	INIT_WORK(&bus->request_rx, wfx_spi_request_rx);
+ 	bus->core = wfx_init_common(&func->dev, &wfx_spi_pdata,
+ 				    &wfx_spi_hwbus_ops, bus);
+ 	if (!bus->core)
+ 		return -EIO;
+ 
+-	ret = wfx_probe(bus->core);
++	ret = devm_add_action_or_reset(&func->dev, wfx_flush_irq_work,
++				       &bus->request_rx);
+ 	if (ret)
+-		wfx_free_common(bus->core);
++		return ret;
+ 
+-	return ret;
++	ret = devm_request_irq(&func->dev, func->irq, wfx_spi_irq_handler,
++			       IRQF_TRIGGER_RISING, "wfx", bus);
++	if (ret)
++		return ret;
++
++	return wfx_probe(bus->core);
+ }
+ 
+ static int wfx_spi_remove(struct spi_device *func)
+@@ -230,11 +240,6 @@ static int wfx_spi_remove(struct spi_device *func)
+ 	struct wfx_spi_priv *bus = spi_get_drvdata(func);
+ 
+ 	wfx_release(bus->core);
+-	wfx_free_common(bus->core);
+-	// A few IRQ will be sent during device release. Hopefully, no IRQ
+-	// should happen after wdev/wvif are released.
+-	devm_free_irq(&func->dev, func->irq, bus);
+-	flush_work(&bus->request_rx);
+ 	return 0;
+ }
+ 
+@@ -244,14 +249,16 @@ static int wfx_spi_remove(struct spi_device *func)
+  * stripped.
+  */
+ static const struct spi_device_id wfx_spi_id[] = {
+-	{ "wfx-spi", 0 },
++	{ "wfx-spi", WFX_RESET_INVERTED },
++	{ "wf200", 0 },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(spi, wfx_spi_id);
+ 
+ #ifdef CONFIG_OF
+ static const struct of_device_id wfx_spi_of_match[] = {
+-	{ .compatible = "silabs,wfx-spi" },
++	{ .compatible = "silabs,wfx-spi", .data = (void *)WFX_RESET_INVERTED },
++	{ .compatible = "silabs,wf200" },
+ 	{ },
+ };
+ MODULE_DEVICE_TABLE(of, wfx_spi_of_match);
+diff --git a/drivers/staging/wfx/main.c b/drivers/staging/wfx/main.c
+index 84adad64fc30..76b2ff7fc7fe 100644
+--- a/drivers/staging/wfx/main.c
++++ b/drivers/staging/wfx/main.c
+@@ -262,6 +262,16 @@ static int wfx_send_pdata_pds(struct wfx_dev *wdev)
+ 	return ret;
+ }
+ 
++static void wfx_free_common(void *data)
++{
++	struct wfx_dev *wdev = data;
++
++	mutex_destroy(&wdev->rx_stats_lock);
++	mutex_destroy(&wdev->conf_mutex);
++	wfx_tx_queues_deinit(wdev);
++	ieee80211_free_hw(wdev->hw);
++}
++
+ struct wfx_dev *wfx_init_common(struct device *dev,
+ 				const struct wfx_platform_data *pdata,
+ 				const struct hwbus_ops *hwbus_ops,
+@@ -332,15 +342,10 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ 	wfx_init_hif_cmd(&wdev->hif_cmd);
+ 	wfx_tx_queues_init(wdev);
+ 
+-	return wdev;
+-}
++	if (devm_add_action_or_reset(dev, wfx_free_common, wdev))
++		return NULL;
+ 
+-void wfx_free_common(struct wfx_dev *wdev)
+-{
+-	mutex_destroy(&wdev->rx_stats_lock);
+-	mutex_destroy(&wdev->conf_mutex);
+-	wfx_tx_queues_deinit(wdev);
+-	ieee80211_free_hw(wdev->hw);
++	return wdev;
+ }
+ 
+ int wfx_probe(struct wfx_dev *wdev)
+diff --git a/drivers/staging/wfx/main.h b/drivers/staging/wfx/main.h
+index 875f8c227803..9c9410072def 100644
+--- a/drivers/staging/wfx/main.h
++++ b/drivers/staging/wfx/main.h
+@@ -34,7 +34,6 @@ struct wfx_dev *wfx_init_common(struct device *dev,
+ 				const struct wfx_platform_data *pdata,
+ 				const struct hwbus_ops *hwbus_ops,
+ 				void *hwbus_priv);
+-void wfx_free_common(struct wfx_dev *wdev);
+ 
+ int wfx_probe(struct wfx_dev *wdev);
+ void wfx_release(struct wfx_dev *wdev);
+diff --git a/drivers/staging/wfx/queue.c b/drivers/staging/wfx/queue.c
+index 0bcc61feee1d..51d6c55ae91f 100644
+--- a/drivers/staging/wfx/queue.c
++++ b/drivers/staging/wfx/queue.c
+@@ -130,12 +130,12 @@ static void wfx_tx_queue_clear(struct wfx_dev *wdev, struct wfx_queue *queue,
+ 	spin_lock_bh(&queue->queue.lock);
+ 	while ((item = __skb_dequeue(&queue->queue)) != NULL)
+ 		skb_queue_head(gc_list, item);
+-	spin_lock_bh(&stats->pending.lock);
++	spin_lock_nested(&stats->pending.lock, 1);
+ 	for (i = 0; i < ARRAY_SIZE(stats->link_map_cache); ++i) {
+ 		stats->link_map_cache[i] -= queue->link_map_cache[i];
+ 		queue->link_map_cache[i] = 0;
+ 	}
+-	spin_unlock_bh(&stats->pending.lock);
++	spin_unlock(&stats->pending.lock);
+ 	spin_unlock_bh(&queue->queue.lock);
+ }
+ 
+@@ -207,9 +207,9 @@ void wfx_tx_queue_put(struct wfx_dev *wdev, struct wfx_queue *queue,
+ 
+ 	++queue->link_map_cache[tx_priv->link_id];
+ 
+-	spin_lock_bh(&stats->pending.lock);
++	spin_lock_nested(&stats->pending.lock, 1);
+ 	++stats->link_map_cache[tx_priv->link_id];
+-	spin_unlock_bh(&stats->pending.lock);
++	spin_unlock(&stats->pending.lock);
+ 	spin_unlock_bh(&queue->queue.lock);
+ }
+ 
+@@ -237,11 +237,11 @@ static struct sk_buff *wfx_tx_queue_get(struct wfx_dev *wdev,
+ 		__skb_unlink(skb, &queue->queue);
+ 		--queue->link_map_cache[tx_priv->link_id];
+ 
+-		spin_lock_bh(&stats->pending.lock);
++		spin_lock_nested(&stats->pending.lock, 1);
+ 		__skb_queue_tail(&stats->pending, skb);
+ 		if (!--stats->link_map_cache[tx_priv->link_id])
+ 			wakeup_stats = true;
+-		spin_unlock_bh(&stats->pending.lock);
++		spin_unlock(&stats->pending.lock);
+ 	}
+ 	spin_unlock_bh(&queue->queue.lock);
+ 	if (wakeup_stats)
+@@ -259,10 +259,10 @@ int wfx_pending_requeue(struct wfx_dev *wdev, struct sk_buff *skb)
+ 	spin_lock_bh(&queue->queue.lock);
+ 	++queue->link_map_cache[tx_priv->link_id];
+ 
+-	spin_lock_bh(&stats->pending.lock);
++	spin_lock_nested(&stats->pending.lock, 1);
+ 	++stats->link_map_cache[tx_priv->link_id];
+ 	__skb_unlink(skb, &stats->pending);
+-	spin_unlock_bh(&stats->pending.lock);
++	spin_unlock(&stats->pending.lock);
+ 	__skb_queue_tail(&queue->queue, skb);
+ 	spin_unlock_bh(&queue->queue.lock);
+ 	return 0;
+diff --git a/drivers/staging/wlan-ng/hfa384x_usb.c b/drivers/staging/wlan-ng/hfa384x_usb.c
+index b71756ab0394..7fe64fcd385d 100644
+--- a/drivers/staging/wlan-ng/hfa384x_usb.c
++++ b/drivers/staging/wlan-ng/hfa384x_usb.c
+@@ -3372,6 +3372,8 @@ static void hfa384x_int_rxmonitor(struct wlandevice *wlandev,
+ 	     WLAN_HDR_A4_LEN + WLAN_DATA_MAXLEN + WLAN_CRC_LEN)) {
+ 		pr_debug("overlen frm: len=%zd\n",
+ 			 skblen - sizeof(struct p80211_caphdr));
++
++		return;
+ 	}
+ 
+ 	skb = dev_alloc_skb(skblen);
+diff --git a/drivers/staging/wlan-ng/prism2usb.c b/drivers/staging/wlan-ng/prism2usb.c
+index 352556f6870a..4689b2170e4f 100644
+--- a/drivers/staging/wlan-ng/prism2usb.c
++++ b/drivers/staging/wlan-ng/prism2usb.c
+@@ -180,6 +180,7 @@ static void prism2sta_disconnect_usb(struct usb_interface *interface)
+ 
+ 		cancel_work_sync(&hw->link_bh);
+ 		cancel_work_sync(&hw->commsqual_bh);
++		cancel_work_sync(&hw->usb_work);
+ 
+ 		/* Now we complete any outstanding commands
+ 		 * and tell everyone who is waiting for their
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 47f09a6ce7bd..84d6f7df09a4 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -923,16 +923,16 @@ static int set_serial_info(struct tty_struct *tty, struct serial_struct *ss)
+ 
+ 	mutex_lock(&acm->port.mutex);
+ 
+-	if ((ss->close_delay != old_close_delay) ||
+-            (ss->closing_wait != old_closing_wait)) {
+-		if (!capable(CAP_SYS_ADMIN))
++	if (!capable(CAP_SYS_ADMIN)) {
++		if ((ss->close_delay != old_close_delay) ||
++		    (ss->closing_wait != old_closing_wait))
+ 			retval = -EPERM;
+-		else {
+-			acm->port.close_delay  = close_delay;
+-			acm->port.closing_wait = closing_wait;
+-		}
+-	} else
+-		retval = -EOPNOTSUPP;
++		else
++			retval = -EOPNOTSUPP;
++	} else {
++		acm->port.close_delay  = close_delay;
++		acm->port.closing_wait = closing_wait;
++	}
+ 
+ 	mutex_unlock(&acm->port.mutex);
+ 	return retval;
+diff --git a/drivers/usb/musb/musb_host.c b/drivers/usb/musb/musb_host.c
+index 886c9b602f8c..5267ad2989ee 100644
+--- a/drivers/usb/musb/musb_host.c
++++ b/drivers/usb/musb/musb_host.c
+@@ -1436,10 +1436,7 @@ done:
+ 	 * We need to map sg if the transfer_buffer is
+ 	 * NULL.
+ 	 */
+-	if (!urb->transfer_buffer)
+-		qh->use_sg = true;
+-
+-	if (qh->use_sg) {
++	if (!urb->transfer_buffer) {
+ 		/* sg_miter_start is already done in musb_ep_program */
+ 		if (!sg_miter_next(&qh->sg_miter)) {
+ 			dev_err(musb->controller, "error: sg list empty\n");
+@@ -1447,9 +1444,8 @@ done:
+ 			status = -EINVAL;
+ 			goto done;
+ 		}
+-		urb->transfer_buffer = qh->sg_miter.addr;
+ 		length = min_t(u32, length, qh->sg_miter.length);
+-		musb_write_fifo(hw_ep, length, urb->transfer_buffer);
++		musb_write_fifo(hw_ep, length, qh->sg_miter.addr);
+ 		qh->sg_miter.consumed = length;
+ 		sg_miter_stop(&qh->sg_miter);
+ 	} else {
+@@ -1458,11 +1454,6 @@ done:
+ 
+ 	qh->segsize = length;
+ 
+-	if (qh->use_sg) {
+-		if (offset + length >= urb->transfer_buffer_length)
+-			qh->use_sg = false;
+-	}
+-
+ 	musb_ep_select(mbase, epnum);
+ 	musb_writew(epio, MUSB_TXCSR,
+ 			MUSB_TXCSR_H_WZC_BITS | MUSB_TXCSR_TXPKTRDY);
+@@ -1977,8 +1968,10 @@ finish:
+ 	urb->actual_length += xfer_len;
+ 	qh->offset += xfer_len;
+ 	if (done) {
+-		if (qh->use_sg)
++		if (qh->use_sg) {
+ 			qh->use_sg = false;
++			urb->transfer_buffer = NULL;
++		}
+ 
+ 		if (urb->status == -EINPROGRESS)
+ 			urb->status = status;
+diff --git a/drivers/usb/serial/io_edgeport.c b/drivers/usb/serial/io_edgeport.c
+index 5737add6a2a4..4cca0b836f43 100644
+--- a/drivers/usb/serial/io_edgeport.c
++++ b/drivers/usb/serial/io_edgeport.c
+@@ -710,7 +710,7 @@ static void edge_interrupt_callback(struct urb *urb)
+ 		/* grab the txcredits for the ports if available */
+ 		position = 2;
+ 		portNumber = 0;
+-		while ((position < length) &&
++		while ((position < length - 1) &&
+ 				(portNumber < edge_serial->serial->num_ports)) {
+ 			txCredits = data[position] | (data[position+1] << 8);
+ 			if (txCredits) {
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 0b5dcf973d94..8bfffca3e4ae 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1992,8 +1992,14 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e01, 0xff, 0xff, 0xff) },	/* D-Link DWM-152/C1 */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x3e02, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/C1 */
+ 	{ USB_DEVICE_AND_INTERFACE_INFO(0x07d1, 0x7e11, 0xff, 0xff, 0xff) },	/* D-Link DWM-156/A3 */
++	{ USB_DEVICE_INTERFACE_CLASS(0x1435, 0xd191, 0xff),			/* Wistron Neweb D19Q1 */
++	  .driver_info = RSVD(1) | RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x1690, 0x7588, 0xff),			/* ASKEY WWHC050 */
++	  .driver_info = RSVD(1) | RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2031, 0xff),			/* Olicard 600 */
+ 	  .driver_info = RSVD(4) },
++	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2033, 0xff),			/* BroadMobi BM806U */
++	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x2060, 0xff),			/* BroadMobi BM818 */
+ 	  .driver_info = RSVD(4) },
+ 	{ USB_DEVICE_INTERFACE_CLASS(0x2020, 0x4000, 0xff) },			/* OLICARD300 - MT6225 */
+diff --git a/fs/libfs.c b/fs/libfs.c
+index c686bd9caac6..3759fbacf522 100644
+--- a/fs/libfs.c
++++ b/fs/libfs.c
+@@ -891,7 +891,7 @@ int simple_attr_open(struct inode *inode, struct file *file,
+ {
+ 	struct simple_attr *attr;
+ 
+-	attr = kmalloc(sizeof(*attr), GFP_KERNEL);
++	attr = kzalloc(sizeof(*attr), GFP_KERNEL);
+ 	if (!attr)
+ 		return -ENOMEM;
+ 
+@@ -931,9 +931,11 @@ ssize_t simple_attr_read(struct file *file, char __user *buf,
+ 	if (ret)
+ 		return ret;
+ 
+-	if (*ppos) {		/* continued read */
++	if (*ppos && attr->get_buf[0]) {
++		/* continued read */
+ 		size = strlen(attr->get_buf);
+-	} else {		/* first read */
++	} else {
++		/* first read */
+ 		u64 val;
+ 		ret = attr->get(attr->data, &val);
+ 		if (ret)
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1cc945daa9c8..5080469094af 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1034,17 +1034,6 @@ static void __reg_bound_offset(struct bpf_reg_state *reg)
+ 						 reg->umax_value));
+ }
+ 
+-static void __reg_bound_offset32(struct bpf_reg_state *reg)
+-{
+-	u64 mask = 0xffffFFFF;
+-	struct tnum range = tnum_range(reg->umin_value & mask,
+-				       reg->umax_value & mask);
+-	struct tnum lo32 = tnum_cast(reg->var_off, 4);
+-	struct tnum hi32 = tnum_lshift(tnum_rshift(reg->var_off, 32), 32);
+-
+-	reg->var_off = tnum_or(hi32, tnum_intersect(lo32, range));
+-}
+-
+ /* Reset the min/max bounds of a register */
+ static void __mark_reg_unbounded(struct bpf_reg_state *reg)
+ {
+@@ -5717,10 +5706,6 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
+ 	/* We might have learned some bits from the bounds. */
+ 	__reg_bound_offset(false_reg);
+ 	__reg_bound_offset(true_reg);
+-	if (is_jmp32) {
+-		__reg_bound_offset32(false_reg);
+-		__reg_bound_offset32(true_reg);
+-	}
+ 	/* Intersecting with the old var_off might have improved our bounds
+ 	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+ 	 * then new var_off is (0; 0x7f...fc) which improves our umax.
+@@ -5830,10 +5815,6 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
+ 	/* We might have learned some bits from the bounds. */
+ 	__reg_bound_offset(false_reg);
+ 	__reg_bound_offset(true_reg);
+-	if (is_jmp32) {
+-		__reg_bound_offset32(false_reg);
+-		__reg_bound_offset32(true_reg);
+-	}
+ 	/* Intersecting with the old var_off might have improved our bounds
+ 	 * slightly.  e.g. if umax was 0x7f...f and var_off was (0; 0xf...fc),
+ 	 * then new var_off is (0; 0x7f...fc) which improves our umax.


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-02 11:35 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-02 11:35 UTC (permalink / raw
  To: gentoo-commits

commit:     f3d8f6a3913ed86bbee062159fc5f3de485dbb82
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr  2 11:35:16 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr  2 11:35:16 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=f3d8f6a3

Linux patch 5.6.2

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |   8 +-
 1001_linux-5.6.2.patch | 411 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 417 insertions(+), 2 deletions(-)

diff --git a/0000_README b/0000_README
index e9a8c70..63c1a01 100644
--- a/0000_README
+++ b/0000_README
@@ -43,9 +43,13 @@ EXPERIMENTAL
 Individual Patch Descriptions:
 --------------------------------------------------------------------------
 
-Patch:  1000_linux-5.6.1.patch
+Patch:  1000_linux-5.6.1.patch
+From:   https://www.kernel.org
+Desc:   Linux 5.6.1
+
+Patch:  1001_linux-5.6.2.patch
 From:   http://www.kernel.org
-Desc:   Linux 5.6.1
+Desc:   Linux 5.6.2
 
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644

diff --git a/1001_linux-5.6.2.patch b/1001_linux-5.6.2.patch
new file mode 100644
index 0000000..0296819
--- /dev/null
+++ b/1001_linux-5.6.2.patch
@@ -0,0 +1,411 @@
+diff --git a/Makefile b/Makefile
+index 75d17e7f799b..680b2d52405f 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 1
++SUBLEVEL = 2
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/platform/x86/pmc_atom.c b/drivers/platform/x86/pmc_atom.c
+index 3e3c66dfec2e..ca684ed760d1 100644
+--- a/drivers/platform/x86/pmc_atom.c
++++ b/drivers/platform/x86/pmc_atom.c
+@@ -383,6 +383,14 @@ static const struct dmi_system_id critclk_systems[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "3I380D"),
+ 		},
+ 	},
++	{
++		/* pmc_plt_clk* - are used for ethernet controllers */
++		.ident = "Lex 2I385SW",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Lex BayTrail"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "2I385SW"),
++		},
++	},
+ 	{
+ 		/* pmc_plt_clk* - are used for ethernet controllers */
+ 		.ident = "Beckhoff CB3163",
+diff --git a/drivers/tty/serial/sprd_serial.c b/drivers/tty/serial/sprd_serial.c
+index 3d3c70634589..a223e934f8ea 100644
+--- a/drivers/tty/serial/sprd_serial.c
++++ b/drivers/tty/serial/sprd_serial.c
+@@ -1132,14 +1132,13 @@ static int sprd_remove(struct platform_device *dev)
+ 	if (sup) {
+ 		uart_remove_one_port(&sprd_uart_driver, &sup->port);
+ 		sprd_port[sup->port.line] = NULL;
++		sprd_rx_free_buf(sup);
+ 		sprd_ports_num--;
+ 	}
+ 
+ 	if (!sprd_ports_num)
+ 		uart_unregister_driver(&sprd_uart_driver);
+ 
+-	sprd_rx_free_buf(sup);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/tty/vt/selection.c b/drivers/tty/vt/selection.c
+index d7d2e4b844bc..7556139cd0da 100644
+--- a/drivers/tty/vt/selection.c
++++ b/drivers/tty/vt/selection.c
+@@ -88,6 +88,11 @@ void clear_selection(void)
+ }
+ EXPORT_SYMBOL_GPL(clear_selection);
+ 
++bool vc_is_sel(struct vc_data *vc)
++{
++	return vc == sel_cons;
++}
++
+ /*
+  * User settable table: what characters are to be considered alphabetic?
+  * 128 bits. Locked by the console lock.
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 15d27698054a..b99ac3ebb2b5 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -890,8 +890,9 @@ static void hide_softcursor(struct vc_data *vc)
+ 
+ static void hide_cursor(struct vc_data *vc)
+ {
+-	if (vc == sel_cons)
++	if (vc_is_sel(vc))
+ 		clear_selection();
++
+ 	vc->vc_sw->con_cursor(vc, CM_ERASE);
+ 	hide_softcursor(vc);
+ }
+@@ -901,7 +902,7 @@ static void set_cursor(struct vc_data *vc)
+ 	if (!con_is_fg(vc) || console_blanked || vc->vc_mode == KD_GRAPHICS)
+ 		return;
+ 	if (vc->vc_deccm) {
+-		if (vc == sel_cons)
++		if (vc_is_sel(vc))
+ 			clear_selection();
+ 		add_softcursor(vc);
+ 		if ((vc->vc_cursor_type & 0x0f) != 1)
+@@ -1074,6 +1075,17 @@ static void visual_deinit(struct vc_data *vc)
+ 	module_put(vc->vc_sw->owner);
+ }
+ 
++static void vc_port_destruct(struct tty_port *port)
++{
++	struct vc_data *vc = container_of(port, struct vc_data, port);
++
++	kfree(vc);
++}
++
++static const struct tty_port_operations vc_port_ops = {
++	.destruct = vc_port_destruct,
++};
++
+ int vc_allocate(unsigned int currcons)	/* return 0 on success */
+ {
+ 	struct vt_notifier_param param;
+@@ -1099,6 +1111,7 @@ int vc_allocate(unsigned int currcons)	/* return 0 on success */
+ 
+ 	vc_cons[currcons].d = vc;
+ 	tty_port_init(&vc->port);
++	vc->port.ops = &vc_port_ops;
+ 	INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK);
+ 
+ 	visual_init(vc, currcons, 1);
+@@ -1207,7 +1220,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ 		}
+ 	}
+ 
+-	if (vc == sel_cons)
++	if (vc_is_sel(vc))
+ 		clear_selection();
+ 
+ 	old_rows = vc->vc_rows;
+@@ -3253,6 +3266,7 @@ static int con_install(struct tty_driver *driver, struct tty_struct *tty)
+ 
+ 	tty->driver_data = vc;
+ 	vc->port.tty = tty;
++	tty_port_get(&vc->port);
+ 
+ 	if (!tty->winsize.ws_row && !tty->winsize.ws_col) {
+ 		tty->winsize.ws_row = vc_cons[currcons].d->vc_rows;
+@@ -3288,6 +3302,13 @@ static void con_shutdown(struct tty_struct *tty)
+ 	console_unlock();
+ }
+ 
++static void con_cleanup(struct tty_struct *tty)
++{
++	struct vc_data *vc = tty->driver_data;
++
++	tty_port_put(&vc->port);
++}
++
+ static int default_color           = 7; /* white */
+ static int default_italic_color    = 2; // green (ASCII)
+ static int default_underline_color = 3; // cyan (ASCII)
+@@ -3413,7 +3434,8 @@ static const struct tty_operations con_ops = {
+ 	.throttle = con_throttle,
+ 	.unthrottle = con_unthrottle,
+ 	.resize = vt_resize,
+-	.shutdown = con_shutdown
++	.shutdown = con_shutdown,
++	.cleanup = con_cleanup,
+ };
+ 
+ static struct cdev vc0_cdev;
+diff --git a/drivers/tty/vt/vt_ioctl.c b/drivers/tty/vt/vt_ioctl.c
+index ee6c91ef1f6c..daf61c28ba76 100644
+--- a/drivers/tty/vt/vt_ioctl.c
++++ b/drivers/tty/vt/vt_ioctl.c
+@@ -39,11 +39,32 @@
+ #include <linux/kbd_diacr.h>
+ #include <linux/selection.h>
+ 
+-char vt_dont_switch;
+-extern struct tty_driver *console_driver;
++bool vt_dont_switch;
+ 
+-#define VT_IS_IN_USE(i)	(console_driver->ttys[i] && console_driver->ttys[i]->count)
+-#define VT_BUSY(i)	(VT_IS_IN_USE(i) || i == fg_console || vc_cons[i].d == sel_cons)
++static inline bool vt_in_use(unsigned int i)
++{
++	const struct vc_data *vc = vc_cons[i].d;
++
++	/*
++	 * console_lock must be held to prevent the vc from being deallocated
++	 * while we're checking whether it's in-use.
++	 */
++	WARN_CONSOLE_UNLOCKED();
++
++	return vc && kref_read(&vc->port.kref) > 1;
++}
++
++static inline bool vt_busy(int i)
++{
++	if (vt_in_use(i))
++		return true;
++	if (i == fg_console)
++		return true;
++	if (vc_is_sel(vc_cons[i].d))
++		return true;
++
++	return false;
++}
+ 
+ /*
+  * Console (vt and kd) routines, as defined by USL SVR4 manual, and by
+@@ -289,16 +310,14 @@ static int vt_disallocate(unsigned int vc_num)
+ 	int ret = 0;
+ 
+ 	console_lock();
+-	if (VT_BUSY(vc_num))
++	if (vt_busy(vc_num))
+ 		ret = -EBUSY;
+ 	else if (vc_num)
+ 		vc = vc_deallocate(vc_num);
+ 	console_unlock();
+ 
+-	if (vc && vc_num >= MIN_NR_CONSOLES) {
+-		tty_port_destroy(&vc->port);
+-		kfree(vc);
+-	}
++	if (vc && vc_num >= MIN_NR_CONSOLES)
++		tty_port_put(&vc->port);
+ 
+ 	return ret;
+ }
+@@ -311,17 +330,15 @@ static void vt_disallocate_all(void)
+ 
+ 	console_lock();
+ 	for (i = 1; i < MAX_NR_CONSOLES; i++)
+-		if (!VT_BUSY(i))
++		if (!vt_busy(i))
+ 			vc[i] = vc_deallocate(i);
+ 		else
+ 			vc[i] = NULL;
+ 	console_unlock();
+ 
+ 	for (i = 1; i < MAX_NR_CONSOLES; i++) {
+-		if (vc[i] && i >= MIN_NR_CONSOLES) {
+-			tty_port_destroy(&vc[i]->port);
+-			kfree(vc[i]);
+-		}
++		if (vc[i] && i >= MIN_NR_CONSOLES)
++			tty_port_put(&vc[i]->port);
+ 	}
+ }
+ 
+@@ -335,22 +352,13 @@ int vt_ioctl(struct tty_struct *tty,
+ {
+ 	struct vc_data *vc = tty->driver_data;
+ 	struct console_font_op op;	/* used in multiple places here */
+-	unsigned int console;
++	unsigned int console = vc->vc_num;
+ 	unsigned char ucval;
+ 	unsigned int uival;
+ 	void __user *up = (void __user *)arg;
+ 	int i, perm;
+ 	int ret = 0;
+ 
+-	console = vc->vc_num;
+-
+-
+-	if (!vc_cons_allocated(console)) { 	/* impossible? */
+-		ret = -ENOIOCTLCMD;
+-		goto out;
+-	}
+-
+-
+ 	/*
+ 	 * To have permissions to do most of the vt ioctls, we either have
+ 	 * to be the owner of the tty, or have CAP_SYS_TTY_CONFIG.
+@@ -641,15 +649,16 @@ int vt_ioctl(struct tty_struct *tty,
+ 		struct vt_stat __user *vtstat = up;
+ 		unsigned short state, mask;
+ 
+-		/* Review: FIXME: Console lock ? */
+ 		if (put_user(fg_console + 1, &vtstat->v_active))
+ 			ret = -EFAULT;
+ 		else {
+ 			state = 1;	/* /dev/tty0 is always open */
++			console_lock(); /* required by vt_in_use() */
+ 			for (i = 0, mask = 2; i < MAX_NR_CONSOLES && mask;
+ 							++i, mask <<= 1)
+-				if (VT_IS_IN_USE(i))
++				if (vt_in_use(i))
+ 					state |= mask;
++			console_unlock();
+ 			ret = put_user(state, &vtstat->v_state);
+ 		}
+ 		break;
+@@ -659,10 +668,11 @@ int vt_ioctl(struct tty_struct *tty,
+ 	 * Returns the first available (non-opened) console.
+ 	 */
+ 	case VT_OPENQRY:
+-		/* FIXME: locking ? - but then this is a stupid API */
++		console_lock(); /* required by vt_in_use() */
+ 		for (i = 0; i < MAX_NR_CONSOLES; ++i)
+-			if (! VT_IS_IN_USE(i))
++			if (!vt_in_use(i))
+ 				break;
++		console_unlock();
+ 		uival = i < MAX_NR_CONSOLES ? (i+1) : -1;
+ 		goto setint;		 
+ 
+@@ -1011,12 +1021,12 @@ int vt_ioctl(struct tty_struct *tty,
+ 	case VT_LOCKSWITCH:
+ 		if (!capable(CAP_SYS_TTY_CONFIG))
+ 			return -EPERM;
+-		vt_dont_switch = 1;
++		vt_dont_switch = true;
+ 		break;
+ 	case VT_UNLOCKSWITCH:
+ 		if (!capable(CAP_SYS_TTY_CONFIG))
+ 			return -EPERM;
+-		vt_dont_switch = 0;
++		vt_dont_switch = false;
+ 		break;
+ 	case VT_GETHIFONTMASK:
+ 		ret = put_user(vc->vc_hi_font_mask,
+@@ -1180,14 +1190,9 @@ long vt_compat_ioctl(struct tty_struct *tty,
+ {
+ 	struct vc_data *vc = tty->driver_data;
+ 	struct console_font_op op;	/* used in multiple places here */
+-	unsigned int console = vc->vc_num;
+ 	void __user *up = compat_ptr(arg);
+ 	int perm;
+ 
+-
+-	if (!vc_cons_allocated(console)) 	/* impossible? */
+-		return -ENOIOCTLCMD;
+-
+ 	/*
+ 	 * To have permissions to do most of the vt ioctls, we either have
+ 	 * to be the owner of the tty, or have CAP_SYS_TTY_CONFIG.
+diff --git a/include/linux/selection.h b/include/linux/selection.h
+index e2c1f96bf059..5b890ef5b59f 100644
+--- a/include/linux/selection.h
++++ b/include/linux/selection.h
+@@ -11,8 +11,8 @@
+ #include <linux/tiocl.h>
+ #include <linux/vt_buffer.h>
+ 
+-extern struct vc_data *sel_cons;
+ struct tty_struct;
++struct vc_data;
+ 
+ extern void clear_selection(void);
+ extern int set_selection_user(const struct tiocl_selection __user *sel,
+@@ -24,6 +24,8 @@ extern int sel_loadlut(char __user *p);
+ extern int mouse_reporting(void);
+ extern void mouse_report(struct tty_struct * tty, int butt, int mrx, int mry);
+ 
++bool vc_is_sel(struct vc_data *vc);
++
+ extern int console_blanked;
+ 
+ extern const unsigned char color_table[];
+diff --git a/include/linux/vt_kern.h b/include/linux/vt_kern.h
+index 8dc77e40bc03..ded5c48598f3 100644
+--- a/include/linux/vt_kern.h
++++ b/include/linux/vt_kern.h
+@@ -135,7 +135,7 @@ extern int do_unbind_con_driver(const struct consw *csw, int first, int last,
+ 			     int deflt);
+ int vty_init(const struct file_operations *console_fops);
+ 
+-extern char vt_dont_switch;
++extern bool vt_dont_switch;
+ extern int default_utf8;
+ extern int global_cursor_default;
+ 
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index d9cca6dbd870..efe4c1fc68e5 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3610,7 +3610,8 @@ begin:
+ 		 * Drop unicast frames to unauthorised stations unless they are
+ 		 * EAPOL frames from the local station.
+ 		 */
+-		if (unlikely(!ieee80211_vif_is_mesh(&tx.sdata->vif) &&
++		if (unlikely(ieee80211_is_data(hdr->frame_control) &&
++			     !ieee80211_vif_is_mesh(&tx.sdata->vif) &&
+ 			     tx.sdata->vif.type != NL80211_IFTYPE_OCB &&
+ 			     !is_multicast_ether_addr(hdr->addr1) &&
+ 			     !test_sta_flag(tx.sta, WLAN_STA_AUTHORIZED) &&
+diff --git a/tools/testing/selftests/bpf/verifier/jmp32.c b/tools/testing/selftests/bpf/verifier/jmp32.c
+index bd5cae4a7f73..79eeed6029f5 100644
+--- a/tools/testing/selftests/bpf/verifier/jmp32.c
++++ b/tools/testing/selftests/bpf/verifier/jmp32.c
+@@ -783,7 +783,8 @@
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ 	.fixup_map_hash_48b = { 4 },
+-	.result = ACCEPT,
++	.result = REJECT,
++	.errstr = "R8 unbounded memory access",
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+@@ -811,7 +812,8 @@
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ 	.fixup_map_hash_48b = { 4 },
+-	.result = ACCEPT,
++	.result = REJECT,
++	.errstr = "R8 unbounded memory access",
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },
+ {
+@@ -839,6 +841,7 @@
+ 	},
+ 	.prog_type = BPF_PROG_TYPE_SCHED_CLS,
+ 	.fixup_map_hash_48b = { 4 },
+-	.result = ACCEPT,
++	.result = REJECT,
++	.errstr = "R8 unbounded memory access",
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,
+ },


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-02 11:37 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-02 11:37 UTC (permalink / raw
  To: gentoo-commits

commit:     2baa4ecd9251329a07e5a42573f5c6fcee0941da
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr  2 11:37:09 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr  2 11:37:09 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=2baa4ecd

Removal of redundant patch

Removed: 2400_mac80211-iwlwifi-authentication-fix.patch

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                    |  4 ---
 2400_mac80211-iwlwifi-authentication-fix.patch | 34 --------------------------
 2 files changed, 38 deletions(-)

diff --git a/0000_README b/0000_README
index 63c1a01..df41c26 100644
--- a/0000_README
+++ b/0000_README
@@ -63,10 +63,6 @@ Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758
 
-Patch:  2400_mac80211-iwlwifi-authentication-fix.patch
-From:   https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net.git/patch/?id=be8c827f50a0bcd56361b31ada11dc0a3c2fd240
-Desc:   mac80211: fix authentication with iwlwifi/mvm
-
 Patch:  2600_enable-key-swapping-for-apple-mac.patch
 From:   https://github.com/free5lot/hid-apple-patched
 Desc:   This hid-apple patch enables swapping of the FN and left Control keys and some additional on some apple keyboards. See bug #622902

diff --git a/2400_mac80211-iwlwifi-authentication-fix.patch b/2400_mac80211-iwlwifi-authentication-fix.patch
deleted file mode 100644
index 87f14d3..0000000
--- a/2400_mac80211-iwlwifi-authentication-fix.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From be8c827f50a0bcd56361b31ada11dc0a3c2fd240 Mon Sep 17 00:00:00 2001
-From: Johannes Berg <johannes.berg@intel.com>
-Date: Sun, 29 Mar 2020 22:50:06 +0200
-Subject: mac80211: fix authentication with iwlwifi/mvm
-
-The original patch didn't copy the ieee80211_is_data() condition
-because on most drivers the management frames don't go through
-this path. However, they do on iwlwifi/mvm, so we do need to keep
-the condition here.
-
-Cc: stable@vger.kernel.org
-Fixes: ce2e1ca70307 ("mac80211: Check port authorization in the ieee80211_tx_dequeue() case")
-Signed-off-by: Johannes Berg <johannes.berg@intel.com>
-Signed-off-by: David S. Miller <davem@davemloft.net>
----
- net/mac80211/tx.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
-index d9cca6dbd870..efe4c1fc68e5 100644
---- a/net/mac80211/tx.c
-+++ b/net/mac80211/tx.c
-@@ -3610,7 +3610,8 @@ begin:
- 		 * Drop unicast frames to unauthorised stations unless they are
- 		 * EAPOL frames from the local station.
- 		 */
--		if (unlikely(!ieee80211_vif_is_mesh(&tx.sdata->vif) &&
-+		if (unlikely(ieee80211_is_data(hdr->frame_control) &&
-+			     !ieee80211_vif_is_mesh(&tx.sdata->vif) &&
- 			     tx.sdata->vif.type != NL80211_IFTYPE_OCB &&
- 			     !is_multicast_ether_addr(hdr->addr1) &&
- 			     !test_sta_flag(tx.sta, WLAN_STA_AUTHORIZED) &&
--- 
-cgit 1.2-0.3.lf.el7


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-08 12:45 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-08 12:45 UTC (permalink / raw
  To: gentoo-commits

commit:     6db4a39e6c9ff189546c40e4f1404cd1b497420c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  8 12:45:07 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  8 12:45:07 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6db4a39e

Linux patch 5.6.3

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |   4 +
 1002_linux-5.6.3.patch | 901 +++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 905 insertions(+)

diff --git a/0000_README b/0000_README
index df41c26..abd4b3d 100644
--- a/0000_README
+++ b/0000_README
@@ -51,6 +51,10 @@ Patch:  1001_linux-5.6.2.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.2
 
+Patch:  1002_linux-5.6.3.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.3
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1002_linux-5.6.3.patch b/1002_linux-5.6.3.patch
new file mode 100644
index 0000000..29df01a
--- /dev/null
+++ b/1002_linux-5.6.3.patch
@@ -0,0 +1,901 @@
+diff --git a/Makefile b/Makefile
+index 680b2d52405f..41aafb394d25 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 2
++SUBLEVEL = 3
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/drivers/extcon/extcon-axp288.c b/drivers/extcon/extcon-axp288.c
+index a7f216191493..710a3bb66e95 100644
+--- a/drivers/extcon/extcon-axp288.c
++++ b/drivers/extcon/extcon-axp288.c
+@@ -443,9 +443,40 @@ static int axp288_extcon_probe(struct platform_device *pdev)
+ 	/* Start charger cable type detection */
+ 	axp288_extcon_enable(info);
+ 
++	device_init_wakeup(dev, true);
++	platform_set_drvdata(pdev, info);
++
++	return 0;
++}
++
++static int __maybe_unused axp288_extcon_suspend(struct device *dev)
++{
++	struct axp288_extcon_info *info = dev_get_drvdata(dev);
++
++	if (device_may_wakeup(dev))
++		enable_irq_wake(info->irq[VBUS_RISING_IRQ]);
++
+ 	return 0;
+ }
+ 
++static int __maybe_unused axp288_extcon_resume(struct device *dev)
++{
++	struct axp288_extcon_info *info = dev_get_drvdata(dev);
++
++	/*
++	 * Wakeup when a charger is connected to do charger-type
++	 * connection and generate an extcon event which makes the
++	 * axp288 charger driver set the input current limit.
++	 */
++	if (device_may_wakeup(dev))
++		disable_irq_wake(info->irq[VBUS_RISING_IRQ]);
++
++	return 0;
++}
++
++static SIMPLE_DEV_PM_OPS(axp288_extcon_pm_ops, axp288_extcon_suspend,
++			 axp288_extcon_resume);
++
+ static const struct platform_device_id axp288_extcon_table[] = {
+ 	{ .name = "axp288_extcon" },
+ 	{},
+@@ -457,6 +488,7 @@ static struct platform_driver axp288_extcon_driver = {
+ 	.id_table = axp288_extcon_table,
+ 	.driver = {
+ 		.name = "axp288_extcon",
++		.pm = &axp288_extcon_pm_ops,
+ 	},
+ };
+ 
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c b/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
+index 2dfa2fd2a23b..526507102c1e 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
+@@ -711,14 +711,14 @@ static int anx6345_i2c_probe(struct i2c_client *client,
+ 		DRM_DEBUG("No panel found\n");
+ 
+ 	/* 1.2V digital core power regulator  */
+-	anx6345->dvdd12 = devm_regulator_get(dev, "dvdd12-supply");
++	anx6345->dvdd12 = devm_regulator_get(dev, "dvdd12");
+ 	if (IS_ERR(anx6345->dvdd12)) {
+ 		DRM_ERROR("dvdd12-supply not found\n");
+ 		return PTR_ERR(anx6345->dvdd12);
+ 	}
+ 
+ 	/* 2.5V digital core power regulator  */
+-	anx6345->dvdd25 = devm_regulator_get(dev, "dvdd25-supply");
++	anx6345->dvdd25 = devm_regulator_get(dev, "dvdd25");
+ 	if (IS_ERR(anx6345->dvdd25)) {
+ 		DRM_ERROR("dvdd25-supply not found\n");
+ 		return PTR_ERR(anx6345->dvdd25);
+diff --git a/drivers/gpu/drm/i915/display/intel_display.c b/drivers/gpu/drm/i915/display/intel_display.c
+index aa453953908b..732db609c897 100644
+--- a/drivers/gpu/drm/i915/display/intel_display.c
++++ b/drivers/gpu/drm/i915/display/intel_display.c
+@@ -14582,8 +14582,8 @@ static int intel_atomic_check(struct drm_device *dev,
+ 	/* Catch I915_MODE_FLAG_INHERITED */
+ 	for_each_oldnew_intel_crtc_in_state(state, crtc, old_crtc_state,
+ 					    new_crtc_state, i) {
+-		if (new_crtc_state->hw.mode.private_flags !=
+-		    old_crtc_state->hw.mode.private_flags)
++		if (new_crtc_state->uapi.mode.private_flags !=
++		    old_crtc_state->uapi.mode.private_flags)
+ 			new_crtc_state->uapi.mode_changed = true;
+ 	}
+ 
+diff --git a/drivers/md/dm.c b/drivers/md/dm.c
+index 0413018c8305..df13fdebe21f 100644
+--- a/drivers/md/dm.c
++++ b/drivers/md/dm.c
+@@ -1739,8 +1739,9 @@ static blk_qc_t dm_process_bio(struct mapped_device *md,
+ 	 * won't be imposed.
+ 	 */
+ 	if (current->bio_list) {
+-		blk_queue_split(md->queue, &bio);
+-		if (!is_abnormal_io(bio))
++		if (is_abnormal_io(bio))
++			blk_queue_split(md->queue, &bio);
++		else
+ 			dm_queue_split(md, ti, &bio);
+ 	}
+ 
+diff --git a/drivers/misc/cardreader/rts5227.c b/drivers/misc/cardreader/rts5227.c
+index 423fecc19fc4..3a9467aaa435 100644
+--- a/drivers/misc/cardreader/rts5227.c
++++ b/drivers/misc/cardreader/rts5227.c
+@@ -394,6 +394,7 @@ static const struct pcr_ops rts522a_pcr_ops = {
+ void rts522a_init_params(struct rtsx_pcr *pcr)
+ {
+ 	rts5227_init_params(pcr);
++	pcr->ops = &rts522a_pcr_ops;
+ 	pcr->tx_initial_phase = SET_CLOCK_PHASE(20, 20, 11);
+ 	pcr->reg_pm_ctrl3 = RTS522A_PM_CTRL3;
+ 
+diff --git a/drivers/misc/mei/hw-me-regs.h b/drivers/misc/mei/hw-me-regs.h
+index 87a0201ba6b3..5213eacc8b86 100644
+--- a/drivers/misc/mei/hw-me-regs.h
++++ b/drivers/misc/mei/hw-me-regs.h
+@@ -87,6 +87,8 @@
+ #define MEI_DEV_ID_CMP_H      0x06e0  /* Comet Lake H */
+ #define MEI_DEV_ID_CMP_H_3    0x06e4  /* Comet Lake H 3 (iTouch) */
+ 
++#define MEI_DEV_ID_CDF        0x18D3  /* Cedar Fork */
++
+ #define MEI_DEV_ID_ICP_LP     0x34E0  /* Ice Lake Point LP */
+ 
+ #define MEI_DEV_ID_JSP_N      0x4DE0  /* Jasper Lake Point N */
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 2711451b3d87..90ee4484a80a 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -111,6 +111,8 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_MCC, MEI_ME_PCH15_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_MCC_4, MEI_ME_PCH8_CFG)},
+ 
++	{MEI_PCI_DEVICE(MEI_DEV_ID_CDF, MEI_ME_PCH8_CFG)},
++
+ 	/* required last entry */
+ 	{0, }
+ };
+diff --git a/drivers/misc/pci_endpoint_test.c b/drivers/misc/pci_endpoint_test.c
+index a5e317073d95..32e9f267d84f 100644
+--- a/drivers/misc/pci_endpoint_test.c
++++ b/drivers/misc/pci_endpoint_test.c
+@@ -98,6 +98,7 @@ struct pci_endpoint_test {
+ 	struct completion irq_raised;
+ 	int		last_irq;
+ 	int		num_irqs;
++	int		irq_type;
+ 	/* mutex to protect the ioctls */
+ 	struct mutex	mutex;
+ 	struct miscdevice miscdev;
+@@ -157,6 +158,7 @@ static void pci_endpoint_test_free_irq_vectors(struct pci_endpoint_test *test)
+ 	struct pci_dev *pdev = test->pdev;
+ 
+ 	pci_free_irq_vectors(pdev);
++	test->irq_type = IRQ_TYPE_UNDEFINED;
+ }
+ 
+ static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
+@@ -191,6 +193,8 @@ static bool pci_endpoint_test_alloc_irq_vectors(struct pci_endpoint_test *test,
+ 		irq = 0;
+ 		res = false;
+ 	}
++
++	test->irq_type = type;
+ 	test->num_irqs = irq;
+ 
+ 	return res;
+@@ -330,6 +334,7 @@ static bool pci_endpoint_test_copy(struct pci_endpoint_test *test, size_t size)
+ 	dma_addr_t orig_dst_phys_addr;
+ 	size_t offset;
+ 	size_t alignment = test->alignment;
++	int irq_type = test->irq_type;
+ 	u32 src_crc32;
+ 	u32 dst_crc32;
+ 
+@@ -426,6 +431,7 @@ static bool pci_endpoint_test_write(struct pci_endpoint_test *test, size_t size)
+ 	dma_addr_t orig_phys_addr;
+ 	size_t offset;
+ 	size_t alignment = test->alignment;
++	int irq_type = test->irq_type;
+ 	u32 crc32;
+ 
+ 	if (size > SIZE_MAX - alignment)
+@@ -494,6 +500,7 @@ static bool pci_endpoint_test_read(struct pci_endpoint_test *test, size_t size)
+ 	dma_addr_t orig_phys_addr;
+ 	size_t offset;
+ 	size_t alignment = test->alignment;
++	int irq_type = test->irq_type;
+ 	u32 crc32;
+ 
+ 	if (size > SIZE_MAX - alignment)
+@@ -555,7 +562,7 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
+ 		return false;
+ 	}
+ 
+-	if (irq_type == req_irq_type)
++	if (test->irq_type == req_irq_type)
+ 		return true;
+ 
+ 	pci_endpoint_test_release_irq(test);
+@@ -567,12 +574,10 @@ static bool pci_endpoint_test_set_irq(struct pci_endpoint_test *test,
+ 	if (!pci_endpoint_test_request_irq(test))
+ 		goto err;
+ 
+-	irq_type = req_irq_type;
+ 	return true;
+ 
+ err:
+ 	pci_endpoint_test_free_irq_vectors(test);
+-	irq_type = IRQ_TYPE_UNDEFINED;
+ 	return false;
+ }
+ 
+@@ -633,7 +638,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
+ {
+ 	int err;
+ 	int id;
+-	char name[20];
++	char name[24];
+ 	enum pci_barno bar;
+ 	void __iomem *base;
+ 	struct device *dev = &pdev->dev;
+@@ -652,6 +657,7 @@ static int pci_endpoint_test_probe(struct pci_dev *pdev,
+ 	test->test_reg_bar = 0;
+ 	test->alignment = 0;
+ 	test->pdev = pdev;
++	test->irq_type = IRQ_TYPE_UNDEFINED;
+ 
+ 	if (no_msi)
+ 		irq_type = IRQ_TYPE_LEGACY;
+diff --git a/drivers/net/dsa/microchip/Kconfig b/drivers/net/dsa/microchip/Kconfig
+index 1d7870c6df3c..4ec6a47b7f72 100644
+--- a/drivers/net/dsa/microchip/Kconfig
++++ b/drivers/net/dsa/microchip/Kconfig
+@@ -1,5 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0-only
+ config NET_DSA_MICROCHIP_KSZ_COMMON
++	select NET_DSA_TAG_KSZ
+ 	tristate
+ 
+ menuconfig NET_DSA_MICROCHIP_KSZ9477
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index 2c28da1737fe..b3a51935e8e0 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -724,6 +724,9 @@ static int macb_mdiobus_register(struct macb *bp)
+ {
+ 	struct device_node *child, *np = bp->pdev->dev.of_node;
+ 
++	if (of_phy_is_fixed_link(np))
++		return mdiobus_register(bp->mii_bus);
++
+ 	/* Only create the PHY from the device tree if at least one PHY is
+ 	 * described. Otherwise scan the entire MDIO bus. We do this to support
+ 	 * old device tree that did not follow the best practices and did not
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+index f9047db6a11d..3a08252f1a53 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c
+@@ -1938,6 +1938,8 @@ static uint brcmf_sdio_readframes(struct brcmf_sdio *bus, uint maxframes)
+ 			if (brcmf_sdio_hdparse(bus, bus->rxhdr, &rd_new,
+ 					       BRCMF_SDIO_FT_NORMAL)) {
+ 				rd->len = 0;
++				brcmf_sdio_rxfail(bus, true, true);
++				sdio_release_host(bus->sdiodev->func1);
+ 				brcmu_pkt_buf_free_skb(pkt);
+ 				continue;
+ 			}
+diff --git a/drivers/nvmem/core.c b/drivers/nvmem/core.c
+index ef326f243f36..5f1988498d75 100644
+--- a/drivers/nvmem/core.c
++++ b/drivers/nvmem/core.c
+@@ -72,6 +72,7 @@ static void nvmem_release(struct device *dev)
+ 	struct nvmem_device *nvmem = to_nvmem_device(dev);
+ 
+ 	ida_simple_remove(&nvmem_ida, nvmem->id);
++	gpiod_put(nvmem->wp_gpio);
+ 	kfree(nvmem);
+ }
+ 
+diff --git a/drivers/nvmem/nvmem-sysfs.c b/drivers/nvmem/nvmem-sysfs.c
+index 9e0c429cd08a..8759c4470012 100644
+--- a/drivers/nvmem/nvmem-sysfs.c
++++ b/drivers/nvmem/nvmem-sysfs.c
+@@ -56,6 +56,9 @@ static ssize_t bin_attr_nvmem_read(struct file *filp, struct kobject *kobj,
+ 
+ 	count = round_down(count, nvmem->word_size);
+ 
++	if (!nvmem->reg_read)
++		return -EPERM;
++
+ 	rc = nvmem->reg_read(nvmem->priv, pos, buf, count);
+ 
+ 	if (rc)
+@@ -90,6 +93,9 @@ static ssize_t bin_attr_nvmem_write(struct file *filp, struct kobject *kobj,
+ 
+ 	count = round_down(count, nvmem->word_size);
+ 
++	if (!nvmem->reg_write)
++		return -EPERM;
++
+ 	rc = nvmem->reg_write(nvmem->priv, pos, buf, count);
+ 
+ 	if (rc)
+diff --git a/drivers/nvmem/sprd-efuse.c b/drivers/nvmem/sprd-efuse.c
+index 2f1e0fbd1901..7a189ef52333 100644
+--- a/drivers/nvmem/sprd-efuse.c
++++ b/drivers/nvmem/sprd-efuse.c
+@@ -239,7 +239,7 @@ static int sprd_efuse_raw_prog(struct sprd_efuse *efuse, u32 blk, bool doub,
+ 		ret = -EBUSY;
+ 	} else {
+ 		sprd_efuse_set_prog_lock(efuse, lock);
+-		writel(*data, efuse->base + SPRD_EFUSE_MEM(blk));
++		writel(0, efuse->base + SPRD_EFUSE_MEM(blk));
+ 		sprd_efuse_set_prog_lock(efuse, false);
+ 	}
+ 
+diff --git a/drivers/pci/pci-sysfs.c b/drivers/pci/pci-sysfs.c
+index 13f766db0684..335dd6fbf039 100644
+--- a/drivers/pci/pci-sysfs.c
++++ b/drivers/pci/pci-sysfs.c
+@@ -464,7 +464,8 @@ static ssize_t dev_rescan_store(struct device *dev,
+ 	}
+ 	return count;
+ }
+-static DEVICE_ATTR_WO(dev_rescan);
++static struct device_attribute dev_attr_dev_rescan = __ATTR(rescan, 0200, NULL,
++							    dev_rescan_store);
+ 
+ static ssize_t remove_store(struct device *dev, struct device_attribute *attr,
+ 			    const char *buf, size_t count)
+@@ -501,7 +502,8 @@ static ssize_t bus_rescan_store(struct device *dev,
+ 	}
+ 	return count;
+ }
+-static DEVICE_ATTR_WO(bus_rescan);
++static struct device_attribute dev_attr_bus_rescan = __ATTR(rescan, 0200, NULL,
++							    bus_rescan_store);
+ 
+ #if defined(CONFIG_PM) && defined(CONFIG_ACPI)
+ static ssize_t d3cold_allowed_store(struct device *dev,
+diff --git a/drivers/power/supply/axp288_charger.c b/drivers/power/supply/axp288_charger.c
+index 1bbba6bba673..cf4c67b2d235 100644
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -21,6 +21,7 @@
+ #include <linux/property.h>
+ #include <linux/mfd/axp20x.h>
+ #include <linux/extcon.h>
++#include <linux/dmi.h>
+ 
+ #define PS_STAT_VBUS_TRIGGER		BIT(0)
+ #define PS_STAT_BAT_CHRG_DIR		BIT(2)
+@@ -545,6 +546,49 @@ out:
+ 	return IRQ_HANDLED;
+ }
+ 
++/*
++ * The HP Pavilion x2 10 series comes in a number of variants:
++ * Bay Trail SoC    + AXP288 PMIC, DMI_BOARD_NAME: "815D"
++ * Cherry Trail SoC + AXP288 PMIC, DMI_BOARD_NAME: "813E"
++ * Cherry Trail SoC + TI PMIC,     DMI_BOARD_NAME: "827C" or "82F4"
++ *
++ * The variants with the AXP288 PMIC are all kinds of special:
++ *
++ * 1. All variants use a Type-C connector which the AXP288 does not support, so
++ * when using a Type-C charger it is not recognized. Unlike most AXP288 devices,
++ * this model actually has mostly working ACPI AC / Battery code, the ACPI code
++ * "solves" this by simply setting the input_current_limit to 3A.
++ * There are still some issues with the ACPI code, so we use this native driver,
++ * and to solve the charging not working (500mA is not enough) issue we hardcode
++ * the 3A input_current_limit like the ACPI code does.
++ *
++ * 2. If no charger is connected the machine boots with the vbus-path disabled.
++ * Normally this is done when a 5V boost converter is active to avoid the PMIC
++ * trying to charge from the 5V boost converter's output. This is done when
++ * an OTG host cable is inserted and the ID pin on the micro-B receptacle is
++ * pulled low and the ID pin has an ACPI event handler associated with it
++ * which re-enables the vbus-path when the ID pin is pulled high when the
++ * OTG host cable is removed. The Type-C connector has no ID pin, there is
++ * no ID pin handler and there appears to be no 5V boost converter, so we
++ * end up not charging because the vbus-path is disabled, until we unplug
++ * the charger which automatically clears the vbus-path disable bit and then
++ * on the second plug-in of the adapter we start charging. To solve the not
++ * charging on first charger plugin we unconditionally enable the vbus-path at
++ * probe on this model, which is safe since there is no 5V boost converter.
++ */
++static const struct dmi_system_id axp288_hp_x2_dmi_ids[] = {
++	{
++		/*
++		 * Bay Trail model has "Hewlett-Packard" as sys_vendor, Cherry
++		 * Trail model has "HP", so we only match on product_name.
++		 */
++		.matches = {
++			DMI_MATCH(DMI_PRODUCT_NAME, "HP Pavilion x2 Detachable"),
++		},
++	},
++	{} /* Terminating entry */
++};
++
+ static void axp288_charger_extcon_evt_worker(struct work_struct *work)
+ {
+ 	struct axp288_chrg_info *info =
+@@ -568,7 +612,11 @@ static void axp288_charger_extcon_evt_worker(struct work_struct *work)
+ 	}
+ 
+ 	/* Determine cable/charger type */
+-	if (extcon_get_state(edev, EXTCON_CHG_USB_SDP) > 0) {
++	if (dmi_check_system(axp288_hp_x2_dmi_ids)) {
++		/* See comment above axp288_hp_x2_dmi_ids declaration */
++		dev_dbg(&info->pdev->dev, "HP X2 with Type-C, setting inlmt to 3A\n");
++		current_limit = 3000000;
++	} else if (extcon_get_state(edev, EXTCON_CHG_USB_SDP) > 0) {
+ 		dev_dbg(&info->pdev->dev, "USB SDP charger is connected\n");
+ 		current_limit = 500000;
+ 	} else if (extcon_get_state(edev, EXTCON_CHG_USB_CDP) > 0) {
+@@ -685,6 +733,13 @@ static int charger_init_hw_regs(struct axp288_chrg_info *info)
+ 		return ret;
+ 	}
+ 
++	if (dmi_check_system(axp288_hp_x2_dmi_ids)) {
++		/* See comment above axp288_hp_x2_dmi_ids declaration */
++		ret = axp288_charger_vbus_path_select(info, true);
++		if (ret < 0)
++			return ret;
++	}
++
+ 	/* Read current charge voltage and current limit */
+ 	ret = regmap_read(info->regmap, AXP20X_CHRG_CTRL1, &val);
+ 	if (ret < 0) {
+diff --git a/drivers/soc/mediatek/mtk-cmdq-helper.c b/drivers/soc/mediatek/mtk-cmdq-helper.c
+index de20e6cba83b..db37144ae98c 100644
+--- a/drivers/soc/mediatek/mtk-cmdq-helper.c
++++ b/drivers/soc/mediatek/mtk-cmdq-helper.c
+@@ -78,6 +78,7 @@ struct cmdq_client *cmdq_mbox_create(struct device *dev, int index, u32 timeout)
+ 	client->pkt_cnt = 0;
+ 	client->client.dev = dev;
+ 	client->client.tx_block = false;
++	client->client.knows_txdone = true;
+ 	client->chan = mbox_request_channel(&client->client, index);
+ 
+ 	if (IS_ERR(client->chan)) {
+diff --git a/include/uapi/linux/coresight-stm.h b/include/uapi/linux/coresight-stm.h
+index aac550a52f80..8847dbf24151 100644
+--- a/include/uapi/linux/coresight-stm.h
++++ b/include/uapi/linux/coresight-stm.h
+@@ -2,8 +2,10 @@
+ #ifndef __UAPI_CORESIGHT_STM_H_
+ #define __UAPI_CORESIGHT_STM_H_
+ 
+-#define STM_FLAG_TIMESTAMPED   BIT(3)
+-#define STM_FLAG_GUARANTEED    BIT(7)
++#include <linux/const.h>
++
++#define STM_FLAG_TIMESTAMPED   _BITUL(3)
++#define STM_FLAG_GUARANTEED    _BITUL(7)
+ 
+ /*
+  * The CoreSight STM supports guaranteed and invariant timing
+diff --git a/include/uapi/sound/asoc.h b/include/uapi/sound/asoc.h
+index 6048553c119d..a74ca232f1fc 100644
+--- a/include/uapi/sound/asoc.h
++++ b/include/uapi/sound/asoc.h
+@@ -17,6 +17,7 @@
+ #define __LINUX_UAPI_SND_ASOC_H
+ 
+ #include <linux/types.h>
++#include <sound/asound.h>
+ 
+ /*
+  * Maximum number of channels topology kcontrol can represent.
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 72777c10bb9c..62082597d4a2 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -512,7 +512,7 @@ static int padata_replace_one(struct padata_shell *ps)
+ static int padata_replace(struct padata_instance *pinst)
+ {
+ 	struct padata_shell *ps;
+-	int err;
++	int err = 0;
+ 
+ 	pinst->flags |= PADATA_RESET;
+ 
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 55c14e8c8859..8c7d7a8468b8 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -12,6 +12,9 @@
+ static unsigned int tests_run;
+ static unsigned int tests_passed;
+ 
++static const unsigned int order_limit =
++		IS_ENABLED(CONFIG_XARRAY_MULTI) ? BITS_PER_LONG : 1;
++
+ #ifndef XA_DEBUG
+ # ifdef __KERNEL__
+ void xa_dump(const struct xarray *xa) { }
+@@ -959,6 +962,20 @@ static noinline void check_multi_find_2(struct xarray *xa)
+ 	}
+ }
+ 
++static noinline void check_multi_find_3(struct xarray *xa)
++{
++	unsigned int order;
++
++	for (order = 5; order < order_limit; order++) {
++		unsigned long index = 1UL << (order - 5);
++
++		XA_BUG_ON(xa, !xa_empty(xa));
++		xa_store_order(xa, 0, order - 4, xa_mk_index(0), GFP_KERNEL);
++		XA_BUG_ON(xa, xa_find_after(xa, &index, ULONG_MAX, XA_PRESENT));
++		xa_erase_index(xa, 0);
++	}
++}
++
+ static noinline void check_find_1(struct xarray *xa)
+ {
+ 	unsigned long i, j, k;
+@@ -1081,6 +1098,7 @@ static noinline void check_find(struct xarray *xa)
+ 	for (i = 2; i < 10; i++)
+ 		check_multi_find_1(xa, i);
+ 	check_multi_find_2(xa);
++	check_multi_find_3(xa);
+ }
+ 
+ /* See find_swap_entry() in mm/shmem.c */
+diff --git a/lib/xarray.c b/lib/xarray.c
+index 1d9fab7db8da..acd1fad2e862 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -1839,7 +1839,8 @@ static bool xas_sibling(struct xa_state *xas)
+ 	if (!node)
+ 		return false;
+ 	mask = (XA_CHUNK_SIZE << node->shift) - 1;
+-	return (xas->xa_index & mask) > (xas->xa_offset << node->shift);
++	return (xas->xa_index & mask) >
++		((unsigned long)xas->xa_offset << node->shift);
+ }
+ 
+ /**
+diff --git a/mm/mempolicy.c b/mm/mempolicy.c
+index 977c641f78cf..f93b52bf6ffc 100644
+--- a/mm/mempolicy.c
++++ b/mm/mempolicy.c
+@@ -2841,7 +2841,9 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
+ 	switch (mode) {
+ 	case MPOL_PREFERRED:
+ 		/*
+-		 * Insist on a nodelist of one node only
++		 * Insist on a nodelist of one node only, although later
++		 * we use first_node(nodes) to grab a single node, so here
++		 * nodelist (or nodes) cannot be empty.
+ 		 */
+ 		if (nodelist) {
+ 			char *rest = nodelist;
+@@ -2849,6 +2851,8 @@ int mpol_parse_str(char *str, struct mempolicy **mpol)
+ 				rest++;
+ 			if (*rest)
+ 				goto out;
++			if (nodes_empty(nodes))
++				goto out;
+ 		}
+ 		break;
+ 	case MPOL_INTERLEAVE:
+diff --git a/net/core/skbuff.c b/net/core/skbuff.c
+index e1101a4f90a6..bea447f38dcc 100644
+--- a/net/core/skbuff.c
++++ b/net/core/skbuff.c
+@@ -3668,6 +3668,7 @@ struct sk_buff *skb_segment_list(struct sk_buff *skb,
+ 
+ 		skb_push(nskb, -skb_network_offset(nskb) + offset);
+ 
++		skb_release_head_state(nskb);
+ 		 __copy_skb_header(nskb, skb);
+ 
+ 		skb_headers_offset_update(nskb, skb_headroom(nskb) - skb_headroom(skb));
+diff --git a/net/ipv4/fib_trie.c b/net/ipv4/fib_trie.c
+index ff0c24371e33..3be0affbabd3 100644
+--- a/net/ipv4/fib_trie.c
++++ b/net/ipv4/fib_trie.c
+@@ -2577,6 +2577,7 @@ static int fib_triestat_seq_show(struct seq_file *seq, void *v)
+ 		   " %zd bytes, size of tnode: %zd bytes.\n",
+ 		   LEAF_SIZE, TNODE_SIZE(0));
+ 
++	rcu_read_lock();
+ 	for (h = 0; h < FIB_TABLE_HASHSZ; h++) {
+ 		struct hlist_head *head = &net->ipv4.fib_table_hash[h];
+ 		struct fib_table *tb;
+@@ -2596,7 +2597,9 @@ static int fib_triestat_seq_show(struct seq_file *seq, void *v)
+ 			trie_show_usage(seq, t->stats);
+ #endif
+ 		}
++		cond_resched_rcu();
+ 	}
++	rcu_read_unlock();
+ 
+ 	return 0;
+ }
+diff --git a/net/ipv4/ip_tunnel.c b/net/ipv4/ip_tunnel.c
+index 74e1d964a615..cd4b84310d92 100644
+--- a/net/ipv4/ip_tunnel.c
++++ b/net/ipv4/ip_tunnel.c
+@@ -142,11 +142,8 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ 			cand = t;
+ 	}
+ 
+-	if (flags & TUNNEL_NO_KEY)
+-		goto skip_key_lookup;
+-
+ 	hlist_for_each_entry_rcu(t, head, hash_node) {
+-		if (t->parms.i_key != key ||
++		if ((!(flags & TUNNEL_NO_KEY) && t->parms.i_key != key) ||
+ 		    t->parms.iph.saddr != 0 ||
+ 		    t->parms.iph.daddr != 0 ||
+ 		    !(t->dev->flags & IFF_UP))
+@@ -158,7 +155,6 @@ struct ip_tunnel *ip_tunnel_lookup(struct ip_tunnel_net *itn,
+ 			cand = t;
+ 	}
+ 
+-skip_key_lookup:
+ 	if (cand)
+ 		return cand;
+ 
+diff --git a/net/ipv4/udp_offload.c b/net/ipv4/udp_offload.c
+index 1a98583a79f4..e67a66fbf27b 100644
+--- a/net/ipv4/udp_offload.c
++++ b/net/ipv4/udp_offload.c
+@@ -453,6 +453,7 @@ struct sk_buff *udp_gro_receive(struct list_head *head, struct sk_buff *skb,
+ 	unsigned int off = skb_gro_offset(skb);
+ 	int flush = 1;
+ 
++	NAPI_GRO_CB(skb)->is_flist = 0;
+ 	if (skb->dev->features & NETIF_F_GRO_FRAGLIST)
+ 		NAPI_GRO_CB(skb)->is_flist = sk ? !udp_sk(sk)->gro_enabled: 1;
+ 
+diff --git a/net/sctp/ipv6.c b/net/sctp/ipv6.c
+index bc734cfaa29e..c87af430107a 100644
+--- a/net/sctp/ipv6.c
++++ b/net/sctp/ipv6.c
+@@ -228,7 +228,8 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ {
+ 	struct sctp_association *asoc = t->asoc;
+ 	struct dst_entry *dst = NULL;
+-	struct flowi6 *fl6 = &fl->u.ip6;
++	struct flowi _fl;
++	struct flowi6 *fl6 = &_fl.u.ip6;
+ 	struct sctp_bind_addr *bp;
+ 	struct ipv6_pinfo *np = inet6_sk(sk);
+ 	struct sctp_sockaddr_entry *laddr;
+@@ -238,7 +239,7 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 	enum sctp_scope scope;
+ 	__u8 matchlen = 0;
+ 
+-	memset(fl6, 0, sizeof(struct flowi6));
++	memset(&_fl, 0, sizeof(_fl));
+ 	fl6->daddr = daddr->v6.sin6_addr;
+ 	fl6->fl6_dport = daddr->v6.sin6_port;
+ 	fl6->flowi6_proto = IPPROTO_SCTP;
+@@ -276,8 +277,11 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 	rcu_read_unlock();
+ 
+ 	dst = ip6_dst_lookup_flow(sock_net(sk), sk, fl6, final_p);
+-	if (!asoc || saddr)
++	if (!asoc || saddr) {
++		t->dst = dst;
++		memcpy(fl, &_fl, sizeof(_fl));
+ 		goto out;
++	}
+ 
+ 	bp = &asoc->base.bind_addr;
+ 	scope = sctp_scope(daddr);
+@@ -300,6 +304,8 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 			if ((laddr->a.sa.sa_family == AF_INET6) &&
+ 			    (sctp_v6_cmp_addr(&dst_saddr, &laddr->a))) {
+ 				rcu_read_unlock();
++				t->dst = dst;
++				memcpy(fl, &_fl, sizeof(_fl));
+ 				goto out;
+ 			}
+ 		}
+@@ -338,6 +344,8 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 			if (!IS_ERR_OR_NULL(dst))
+ 				dst_release(dst);
+ 			dst = bdst;
++			t->dst = dst;
++			memcpy(fl, &_fl, sizeof(_fl));
+ 			break;
+ 		}
+ 
+@@ -351,6 +359,8 @@ static void sctp_v6_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 			dst_release(dst);
+ 		dst = bdst;
+ 		matchlen = bmatchlen;
++		t->dst = dst;
++		memcpy(fl, &_fl, sizeof(_fl));
+ 	}
+ 	rcu_read_unlock();
+ 
+@@ -359,14 +369,12 @@ out:
+ 		struct rt6_info *rt;
+ 
+ 		rt = (struct rt6_info *)dst;
+-		t->dst = dst;
+ 		t->dst_cookie = rt6_get_cookie(rt);
+ 		pr_debug("rt6_dst:%pI6/%d rt6_src:%pI6\n",
+ 			 &rt->rt6i_dst.addr, rt->rt6i_dst.plen,
+-			 &fl6->saddr);
++			 &fl->u.ip6.saddr);
+ 	} else {
+ 		t->dst = NULL;
+-
+ 		pr_debug("no route\n");
+ 	}
+ }
+diff --git a/net/sctp/protocol.c b/net/sctp/protocol.c
+index 78af2fcf90cc..092d1afdee0d 100644
+--- a/net/sctp/protocol.c
++++ b/net/sctp/protocol.c
+@@ -409,7 +409,8 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ {
+ 	struct sctp_association *asoc = t->asoc;
+ 	struct rtable *rt;
+-	struct flowi4 *fl4 = &fl->u.ip4;
++	struct flowi _fl;
++	struct flowi4 *fl4 = &_fl.u.ip4;
+ 	struct sctp_bind_addr *bp;
+ 	struct sctp_sockaddr_entry *laddr;
+ 	struct dst_entry *dst = NULL;
+@@ -419,7 +420,7 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 
+ 	if (t->dscp & SCTP_DSCP_SET_MASK)
+ 		tos = t->dscp & SCTP_DSCP_VAL_MASK;
+-	memset(fl4, 0x0, sizeof(struct flowi4));
++	memset(&_fl, 0x0, sizeof(_fl));
+ 	fl4->daddr  = daddr->v4.sin_addr.s_addr;
+ 	fl4->fl4_dport = daddr->v4.sin_port;
+ 	fl4->flowi4_proto = IPPROTO_SCTP;
+@@ -438,8 +439,11 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 		 &fl4->saddr);
+ 
+ 	rt = ip_route_output_key(sock_net(sk), fl4);
+-	if (!IS_ERR(rt))
++	if (!IS_ERR(rt)) {
+ 		dst = &rt->dst;
++		t->dst = dst;
++		memcpy(fl, &_fl, sizeof(_fl));
++	}
+ 
+ 	/* If there is no association or if a source address is passed, no
+ 	 * more validation is required.
+@@ -502,27 +506,33 @@ static void sctp_v4_get_dst(struct sctp_transport *t, union sctp_addr *saddr,
+ 		odev = __ip_dev_find(sock_net(sk), laddr->a.v4.sin_addr.s_addr,
+ 				     false);
+ 		if (!odev || odev->ifindex != fl4->flowi4_oif) {
+-			if (!dst)
++			if (!dst) {
+ 				dst = &rt->dst;
+-			else
++				t->dst = dst;
++				memcpy(fl, &_fl, sizeof(_fl));
++			} else {
+ 				dst_release(&rt->dst);
++			}
+ 			continue;
+ 		}
+ 
+ 		dst_release(dst);
+ 		dst = &rt->dst;
++		t->dst = dst;
++		memcpy(fl, &_fl, sizeof(_fl));
+ 		break;
+ 	}
+ 
+ out_unlock:
+ 	rcu_read_unlock();
+ out:
+-	t->dst = dst;
+-	if (dst)
++	if (dst) {
+ 		pr_debug("rt_dst:%pI4, rt_src:%pI4\n",
+-			 &fl4->daddr, &fl4->saddr);
+-	else
++			 &fl->u.ip4.daddr, &fl->u.ip4.saddr);
++	} else {
++		t->dst = NULL;
+ 		pr_debug("no route\n");
++	}
+ }
+ 
+ /* For v4, the source address is cached in the route entry(dst). So no need
+diff --git a/net/sctp/socket.c b/net/sctp/socket.c
+index 1b56fc440606..757740115e93 100644
+--- a/net/sctp/socket.c
++++ b/net/sctp/socket.c
+@@ -147,29 +147,44 @@ static void sctp_clear_owner_w(struct sctp_chunk *chunk)
+ 	skb_orphan(chunk->skb);
+ }
+ 
++#define traverse_and_process()	\
++do {				\
++	msg = chunk->msg;	\
++	if (msg == prev_msg)	\
++		continue;	\
++	list_for_each_entry(c, &msg->chunks, frag_list) {	\
++		if ((clear && asoc->base.sk == c->skb->sk) ||	\
++		    (!clear && asoc->base.sk != c->skb->sk))	\
++			cb(c);	\
++	}			\
++	prev_msg = msg;		\
++} while (0)
++
+ static void sctp_for_each_tx_datachunk(struct sctp_association *asoc,
++				       bool clear,
+ 				       void (*cb)(struct sctp_chunk *))
+ 
+ {
++	struct sctp_datamsg *msg, *prev_msg = NULL;
+ 	struct sctp_outq *q = &asoc->outqueue;
++	struct sctp_chunk *chunk, *c;
+ 	struct sctp_transport *t;
+-	struct sctp_chunk *chunk;
+ 
+ 	list_for_each_entry(t, &asoc->peer.transport_addr_list, transports)
+ 		list_for_each_entry(chunk, &t->transmitted, transmitted_list)
+-			cb(chunk);
++			traverse_and_process();
+ 
+ 	list_for_each_entry(chunk, &q->retransmit, transmitted_list)
+-		cb(chunk);
++		traverse_and_process();
+ 
+ 	list_for_each_entry(chunk, &q->sacked, transmitted_list)
+-		cb(chunk);
++		traverse_and_process();
+ 
+ 	list_for_each_entry(chunk, &q->abandoned, transmitted_list)
+-		cb(chunk);
++		traverse_and_process();
+ 
+ 	list_for_each_entry(chunk, &q->out_chunk_list, list)
+-		cb(chunk);
++		traverse_and_process();
+ }
+ 
+ static void sctp_for_each_rx_skb(struct sctp_association *asoc, struct sock *sk,
+@@ -9574,9 +9589,9 @@ static int sctp_sock_migrate(struct sock *oldsk, struct sock *newsk,
+ 	 * paths won't try to lock it and then oldsk.
+ 	 */
+ 	lock_sock_nested(newsk, SINGLE_DEPTH_NESTING);
+-	sctp_for_each_tx_datachunk(assoc, sctp_clear_owner_w);
++	sctp_for_each_tx_datachunk(assoc, true, sctp_clear_owner_w);
+ 	sctp_assoc_migrate(assoc, newsk);
+-	sctp_for_each_tx_datachunk(assoc, sctp_set_owner_w);
++	sctp_for_each_tx_datachunk(assoc, false, sctp_set_owner_w);
+ 
+ 	/* If the association on the newsk is already closed before accept()
+ 	 * is called, set RCV_SHUTDOWN flag.
+diff --git a/sound/pci/hda/patch_ca0132.c b/sound/pci/hda/patch_ca0132.c
+index ded8bc07d755..10223e080d59 100644
+--- a/sound/pci/hda/patch_ca0132.c
++++ b/sound/pci/hda/patch_ca0132.c
+@@ -1180,6 +1180,7 @@ static const struct snd_pci_quirk ca0132_quirks[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xA016, "Recon3Di", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1458, 0xA026, "Gigabyte G1.Sniper Z97", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1458, 0xA036, "Gigabyte GA-Z170X-Gaming 7", QUIRK_R3DI),
++	SND_PCI_QUIRK(0x3842, 0x1038, "EVGA X99 Classified", QUIRK_R3DI),
+ 	SND_PCI_QUIRK(0x1102, 0x0013, "Recon3D", QUIRK_R3D),
+ 	SND_PCI_QUIRK(0x1102, 0x0051, "Sound Blaster AE-5", QUIRK_AE5),
+ 	{}
+diff --git a/tools/perf/util/setup.py b/tools/perf/util/setup.py
+index 8a065a6f9713..347b2c0789e4 100644
+--- a/tools/perf/util/setup.py
++++ b/tools/perf/util/setup.py
+@@ -3,7 +3,7 @@ from subprocess import Popen, PIPE
+ from re import sub
+ 
+ cc = getenv("CC")
+-cc_is_clang = b"clang version" in Popen([cc, "-v"], stderr=PIPE).stderr.readline()
++cc_is_clang = b"clang version" in Popen([cc.split()[0], "-v"], stderr=PIPE).stderr.readline()
+ 
+ def clang_has_option(option):
+     return [o for o in Popen([cc, option], stderr=PIPE).stderr.readlines() if b"unknown argument" in o] == [ ]



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-08 17:39 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-08 17:39 UTC (permalink / raw
  To: gentoo-commits

commit:     1925b0520f1735eb1c30313f518c521fc5478adf
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr  8 17:37:44 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr  8 17:37:44 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=1925b052

Add support for ZSTD-compressed kernel and initramfs (use=experimental)

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                        |  32 ++
 ..._ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch |  82 ++++
 ...STD-v4-2-8-prepare-xxhash-for-preboot-env.patch |  94 +++++
 ...STD-v4-3-8-add-zstd-support-to-decompress.patch | 422 +++++++++++++++++++++
 ...-v4-4-8-add-support-for-zstd-compres-kern.patch |  65 ++++
 ...add-support-for-zstd-compressed-initramfs.patch |  48 +++
 ..._ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch |  20 +
 ...v4-7-8-support-for-ZSTD-compressed-kernel.patch |  92 +++++
 ...4-8-8-gitignore-add-ZSTD-compressed-files.patch |  12 +
 9 files changed, 867 insertions(+)

diff --git a/0000_README b/0000_README
index abd4b3d..7af0186 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,38 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
+Patch: 	5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   lib: prepare zstd for preboot environment
+
+Patch:  5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   lib: prepare xxhash for preboot environment
+
+Patch:  5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   lib: add zstd support to decompress
+
+Patch:  5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   init: add support for zstd compressed kernel
+
+Patch:  5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   usr: add support for zstd compressed initramfs
+
+Patch:  5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   x86: bump ZO_z_extra_bytes margin for zstd
+
+Patch:  5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   x86: Add support for ZSTD compressed kernel
+
+Patch:  5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch
+From:   https://lkml.org/lkml/2020/4/1/29
+Desc:   .gitignore: add ZSTD-compressed files
+
 Patch:  5012_enable-cpu-optimizations-for-gcc91.patch
 From:   https://github.com/graysky2/kernel_gcc_patch/
 Desc:   Kernel patch enables gcc >= v9.1 optimizations for additional CPUs.

diff --git a/5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch b/5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch
new file mode 100644
index 0000000..297a8d4
--- /dev/null
+++ b/5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch
@@ -0,0 +1,82 @@
+diff --git a/lib/zstd/decompress.c b/lib/zstd/decompress.c
+index 269ee9a796c1..73ded63278cf 100644
+--- a/lib/zstd/decompress.c
++++ b/lib/zstd/decompress.c
+@@ -2490,6 +2490,7 @@ size_t ZSTD_decompressStream(ZSTD_DStream *zds, ZSTD_outBuffer *output, ZSTD_inB
+ 	}
+ }
+ 
++#ifndef ZSTD_PREBOOT
+ EXPORT_SYMBOL(ZSTD_DCtxWorkspaceBound);
+ EXPORT_SYMBOL(ZSTD_initDCtx);
+ EXPORT_SYMBOL(ZSTD_decompressDCtx);
+@@ -2529,3 +2530,4 @@ EXPORT_SYMBOL(ZSTD_insertBlock);
+ 
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_DESCRIPTION("Zstd Decompressor");
++#endif
+diff --git a/lib/zstd/fse_decompress.c b/lib/zstd/fse_decompress.c
+index a84300e5a013..0b353530fb3f 100644
+--- a/lib/zstd/fse_decompress.c
++++ b/lib/zstd/fse_decompress.c
+@@ -47,6 +47,7 @@
+ ****************************************************************/
+ #include "bitstream.h"
+ #include "fse.h"
++#include "zstd_internal.h"
+ #include <linux/compiler.h>
+ #include <linux/kernel.h>
+ #include <linux/string.h> /* memcpy, memset */
+@@ -60,14 +61,6 @@
+ 		enum { FSE_static_assert = 1 / (int)(!!(c)) }; \
+ 	} /* use only *after* variable declarations */
+ 
+-/* check and forward error code */
+-#define CHECK_F(f)                  \
+-	{                           \
+-		size_t const e = f; \
+-		if (FSE_isError(e)) \
+-			return e;   \
+-	}
+-
+ /* **************************************************************
+ *  Templates
+ ****************************************************************/
+diff --git a/lib/zstd/zstd_internal.h b/lib/zstd/zstd_internal.h
+index 1a79fab9e13a..dac753397f86 100644
+--- a/lib/zstd/zstd_internal.h
++++ b/lib/zstd/zstd_internal.h
+@@ -127,7 +127,14 @@ static const U32 OF_defaultNormLog = OF_DEFAULTNORMLOG;
+ *  Shared functions to include for inlining
+ *********************************************/
+ ZSTD_STATIC void ZSTD_copy8(void *dst, const void *src) {
+-	memcpy(dst, src, 8);
++	/*
++	 * zstd relies heavily on gcc being able to analyze and inline this
++	 * memcpy() call, since it is called in a tight loop. Preboot mode
++	 * is compiled in freestanding mode, which stops gcc from analyzing
++	 * memcpy(). Use __builtin_memcpy() to tell gcc to analyze this as a
++	 * regular memcpy().
++	 */
++	__builtin_memcpy(dst, src, 8);
+ }
+ /*! ZSTD_wildcopy() :
+ *   custom version of memcpy(), can copy up to 7 bytes too many (8 bytes if length==0) */
+@@ -137,13 +144,16 @@ ZSTD_STATIC void ZSTD_wildcopy(void *dst, const void *src, ptrdiff_t length)
+ 	const BYTE* ip = (const BYTE*)src;
+ 	BYTE* op = (BYTE*)dst;
+ 	BYTE* const oend = op + length;
+-	/* Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
++#if defined(GCC_VERSION) && GCC_VERSION >= 70000 && GCC_VERSION < 70200
++	/*
++	 * Work around https://gcc.gnu.org/bugzilla/show_bug.cgi?id=81388.
+ 	 * Avoid the bad case where the loop only runs once by handling the
+ 	 * special case separately. This doesn't trigger the bug because it
+ 	 * doesn't involve pointer/integer overflow.
+ 	 */
+ 	if (length <= 8)
+ 		return ZSTD_copy8(dst, src);
++#endif
+ 	do {
+ 		ZSTD_copy8(op, ip);
+ 		op += 8;

diff --git a/5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch b/5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch
new file mode 100644
index 0000000..88e4674
--- /dev/null
+++ b/5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch
@@ -0,0 +1,94 @@
+diff --git a/lib/xxhash.c b/lib/xxhash.c
+index aa61e2a3802f..b4364e011392 100644
+--- a/lib/xxhash.c
++++ b/lib/xxhash.c
+@@ -80,13 +80,11 @@ void xxh32_copy_state(struct xxh32_state *dst, const struct xxh32_state *src)
+ {
+ 	memcpy(dst, src, sizeof(*dst));
+ }
+-EXPORT_SYMBOL(xxh32_copy_state);
+ 
+ void xxh64_copy_state(struct xxh64_state *dst, const struct xxh64_state *src)
+ {
+ 	memcpy(dst, src, sizeof(*dst));
+ }
+-EXPORT_SYMBOL(xxh64_copy_state);
+ 
+ /*-***************************
+  * Simple Hash Functions
+@@ -151,7 +149,6 @@ uint32_t xxh32(const void *input, const size_t len, const uint32_t seed)
+ 
+ 	return h32;
+ }
+-EXPORT_SYMBOL(xxh32);
+ 
+ static uint64_t xxh64_round(uint64_t acc, const uint64_t input)
+ {
+@@ -234,7 +231,6 @@ uint64_t xxh64(const void *input, const size_t len, const uint64_t seed)
+ 
+ 	return h64;
+ }
+-EXPORT_SYMBOL(xxh64);
+ 
+ /*-**************************************************
+  * Advanced Hash Functions
+@@ -251,7 +247,6 @@ void xxh32_reset(struct xxh32_state *statePtr, const uint32_t seed)
+ 	state.v4 = seed - PRIME32_1;
+ 	memcpy(statePtr, &state, sizeof(state));
+ }
+-EXPORT_SYMBOL(xxh32_reset);
+ 
+ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
+ {
+@@ -265,7 +260,6 @@ void xxh64_reset(struct xxh64_state *statePtr, const uint64_t seed)
+ 	state.v4 = seed - PRIME64_1;
+ 	memcpy(statePtr, &state, sizeof(state));
+ }
+-EXPORT_SYMBOL(xxh64_reset);
+ 
+ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
+ {
+@@ -334,7 +328,6 @@ int xxh32_update(struct xxh32_state *state, const void *input, const size_t len)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(xxh32_update);
+ 
+ uint32_t xxh32_digest(const struct xxh32_state *state)
+ {
+@@ -372,7 +365,6 @@ uint32_t xxh32_digest(const struct xxh32_state *state)
+ 
+ 	return h32;
+ }
+-EXPORT_SYMBOL(xxh32_digest);
+ 
+ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
+ {
+@@ -439,7 +431,6 @@ int xxh64_update(struct xxh64_state *state, const void *input, const size_t len)
+ 
+ 	return 0;
+ }
+-EXPORT_SYMBOL(xxh64_update);
+ 
+ uint64_t xxh64_digest(const struct xxh64_state *state)
+ {
+@@ -494,7 +485,19 @@ uint64_t xxh64_digest(const struct xxh64_state *state)
+ 
+ 	return h64;
+ }
++
++#ifndef XXH_PREBOOT
++EXPORT_SYMBOL(xxh32_copy_state);
++EXPORT_SYMBOL(xxh64_copy_state);
++EXPORT_SYMBOL(xxh32);
++EXPORT_SYMBOL(xxh64);
++EXPORT_SYMBOL(xxh32_reset);
++EXPORT_SYMBOL(xxh64_reset);
++EXPORT_SYMBOL(xxh32_update);
++EXPORT_SYMBOL(xxh32_digest);
++EXPORT_SYMBOL(xxh64_update);
+ EXPORT_SYMBOL(xxh64_digest);
+ 
+ MODULE_LICENSE("Dual BSD/GPL");
+ MODULE_DESCRIPTION("xxHash");
++#endif

diff --git a/5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch b/5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch
new file mode 100644
index 0000000..4f11460
--- /dev/null
+++ b/5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch
@@ -0,0 +1,422 @@
+diff --git a/include/linux/decompress/unzstd.h b/include/linux/decompress/unzstd.h
+new file mode 100644
+index 000000000000..56d539ae880f
+--- /dev/null
++++ b/include/linux/decompress/unzstd.h
+@@ -0,0 +1,11 @@
++/* SPDX-License-Identifier: GPL-2.0 */
++#ifndef LINUX_DECOMPRESS_UNZSTD_H
++#define LINUX_DECOMPRESS_UNZSTD_H
++
++int unzstd(unsigned char *inbuf, long len,
++	   long (*fill)(void*, unsigned long),
++	   long (*flush)(void*, unsigned long),
++	   unsigned char *output,
++	   long *pos,
++	   void (*error_fn)(char *x));
++#endif
+diff --git a/lib/Kconfig b/lib/Kconfig
+index bc7e56370129..11de5fa09a52 100644
+--- a/lib/Kconfig
++++ b/lib/Kconfig
+@@ -336,6 +336,10 @@ config DECOMPRESS_LZ4
+ 	select LZ4_DECOMPRESS
+ 	tristate
+ 
++config DECOMPRESS_ZSTD
++	select ZSTD_DECOMPRESS
++	tristate
++
+ #
+ # Generic allocator support is selected if needed
+ #
+diff --git a/lib/Makefile b/lib/Makefile
+index 611872c06926..09ad45ba6883 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -160,6 +160,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
+ lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
+ lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
+ lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o
++lib-$(CONFIG_DECOMPRESS_ZSTD) += decompress_unzstd.o
+ 
+ obj-$(CONFIG_TEXTSEARCH) += textsearch.o
+ obj-$(CONFIG_TEXTSEARCH_KMP) += ts_kmp.o
+diff --git a/lib/decompress.c b/lib/decompress.c
+index 857ab1af1ef3..ab3fc90ffc64 100644
+--- a/lib/decompress.c
++++ b/lib/decompress.c
+@@ -13,6 +13,7 @@
+ #include <linux/decompress/inflate.h>
+ #include <linux/decompress/unlzo.h>
+ #include <linux/decompress/unlz4.h>
++#include <linux/decompress/unzstd.h>
+ 
+ #include <linux/types.h>
+ #include <linux/string.h>
+@@ -37,6 +38,9 @@
+ #ifndef CONFIG_DECOMPRESS_LZ4
+ # define unlz4 NULL
+ #endif
++#ifndef CONFIG_DECOMPRESS_ZSTD
++# define unzstd NULL
++#endif
+ 
+ struct compress_format {
+ 	unsigned char magic[2];
+@@ -52,6 +56,7 @@ static const struct compress_format compressed_formats[] __initconst = {
+ 	{ {0xfd, 0x37}, "xz", unxz },
+ 	{ {0x89, 0x4c}, "lzo", unlzo },
+ 	{ {0x02, 0x21}, "lz4", unlz4 },
++	{ {0x28, 0xb5}, "zstd", unzstd },
+ 	{ {0, 0}, NULL, NULL }
+ };
+ 
+diff --git a/lib/decompress_unzstd.c b/lib/decompress_unzstd.c
+new file mode 100644
+index 000000000000..f317afab502f
+--- /dev/null
++++ b/lib/decompress_unzstd.c
+@@ -0,0 +1,342 @@
++// SPDX-License-Identifier: GPL-2.0
++
++/*
++ * Important notes about in-place decompression
++ *
++ * At least on x86, the kernel is decompressed in place: the compressed data
++ * is placed to the end of the output buffer, and the decompressor overwrites
++ * most of the compressed data. There must be enough safety margin to
++ * guarantee that the write position is always behind the read position.
++ *
++ * The safety margin for ZSTD with a 128 KB block size is calculated below.
++ * Note that the margin with ZSTD is bigger than with GZIP or XZ!
++ *
++ * The worst case for in-place decompression is that the beginning of
++ * the file is compressed extremely well, and the rest of the file is
++ * uncompressible. Thus, we must look for worst-case expansion when the
++ * compressor is encoding uncompressible data.
++ *
++ * The structure of the .zst file in case of a compressed kernel is as follows.
++ * Maximum sizes (in bytes) of the fields are in parentheses.
++ *
++ *    Frame Header: (18)
++ *    Blocks: (N)
++ *    Checksum: (4)
++ *
++ * The frame header and checksum overhead is at most 22 bytes.
++ *
++ * ZSTD stores the data in blocks. Each block has a header whose size is
++ * 3 bytes. After the block header, there is up to 128 KB of payload.
++ * The maximum uncompressed size of the payload is 128 KB. The minimum
++ * uncompressed size of the payload is never less than the payload size
++ * (excluding the block header).
++ *
++ * The assumption that the uncompressed size of the payload is never
++ * smaller than the payload itself is valid only when talking about
++ * the payload as a whole. It is possible that the payload has parts where
++ * the decompressor consumes more input than it produces output. Calculating
++ * the worst case for this would be tricky. Instead of trying to do that,
++ * let's simply make sure that the decompressor never overwrites any bytes
++ * of the payload which it is currently reading.
++ *
++ * Now we have enough information to calculate the safety margin. We need
++ *   - 22 bytes for the .zst file format headers;
++ *   - 3 bytes per every 128 KiB of uncompressed size (one block header per
++ *     block); and
++ *   - 128 KiB (biggest possible zstd block size) to make sure that the
++ *     decompressor never overwrites anything from the block it is currently
++ *     reading.
++ *
++ * We get the following formula:
++ *
++ *    safety_margin = 22 + uncompressed_size * 3 / 131072 + 131072
++ *                 <= 22 + (uncompressed_size >> 15) + 131072
++ */
++
++/*
++ * Preboot environments #include "path/to/decompress_unzstd.c".
++ * All of the source files we depend on must be #included.
++ * zstd's only source dependency is xxhash, which has no source
++ * dependencies.
++ *
++ * zstd and xxhash avoid declaring themselves as modules
++ * when ZSTD_PREBOOT and XXH_PREBOOT are defined.
++ */
++#ifdef STATIC
++# define ZSTD_PREBOOT
++# define XXH_PREBOOT
++# include "xxhash.c"
++# include "zstd/entropy_common.c"
++# include "zstd/fse_decompress.c"
++# include "zstd/huf_decompress.c"
++# include "zstd/zstd_common.c"
++# include "zstd/decompress.c"
++#endif
++
++#include <linux/decompress/mm.h>
++#include <linux/kernel.h>
++#include <linux/zstd.h>
++
++/* 128MB is the maximum window size supported by zstd. */
++#define ZSTD_WINDOWSIZE_MAX	(1 << ZSTD_WINDOWLOG_MAX)
++/* Size of the input and output buffers in multi-call mode.
++ * Pick a larger size because it isn't used during kernel decompression,
++ * since that is single pass, and we have to allocate a large buffer for
++ * zstd's window anyways. The larger size speeds up initramfs decompression.
++ */
++#define ZSTD_IOBUF_SIZE		(1 << 17)
++
++static int INIT handle_zstd_error(size_t ret, void (*error)(char *x))
++{
++	const int err = ZSTD_getErrorCode(ret);
++
++	if (!ZSTD_isError(ret))
++		return 0;
++
++	switch (err) {
++	case ZSTD_error_memory_allocation:
++		error("ZSTD decompressor ran out of memory");
++		break;
++	case ZSTD_error_prefix_unknown:
++		error("Input is not in the ZSTD format (wrong magic bytes)");
++		break;
++	case ZSTD_error_dstSize_tooSmall:
++	case ZSTD_error_corruption_detected:
++	case ZSTD_error_checksum_wrong:
++		error("ZSTD-compressed data is corrupt");
++		break;
++	default:
++		error("ZSTD-compressed data is probably corrupt");
++		break;
++	}
++	return -1;
++}
++
++/*
++ * Handle the case where we have the entire input and output in one segment.
++ * We can allocate less memory (no circular buffer for the sliding window),
++ * and avoid some memcpy() calls.
++ */
++static int INIT decompress_single(const u8 *in_buf, long in_len, u8 *out_buf,
++				  long out_len, long *in_pos,
++				  void (*error)(char *x))
++{
++	const size_t wksp_size = ZSTD_DCtxWorkspaceBound();
++	void *wksp = large_malloc(wksp_size);
++	ZSTD_DCtx *dctx = ZSTD_initDCtx(wksp, wksp_size);
++	int err;
++	size_t ret;
++
++	if (dctx == NULL) {
++		error("Out of memory while allocating ZSTD_DCtx");
++		err = -1;
++		goto out;
++	}
++	/*
++	 * Find out how large the frame actually is, there may be junk at
++	 * the end of the frame that ZSTD_decompressDCtx() can't handle.
++	 */
++	ret = ZSTD_findFrameCompressedSize(in_buf, in_len);
++	err = handle_zstd_error(ret, error);
++	if (err)
++		goto out;
++	in_len = (long)ret;
++
++	ret = ZSTD_decompressDCtx(dctx, out_buf, out_len, in_buf, in_len);
++	err = handle_zstd_error(ret, error);
++	if (err)
++		goto out;
++
++	if (in_pos != NULL)
++		*in_pos = in_len;
++
++	err = 0;
++out:
++	if (wksp != NULL)
++		large_free(wksp);
++	return err;
++}
++
++static int INIT __unzstd(unsigned char *in_buf, long in_len,
++			 long (*fill)(void*, unsigned long),
++			 long (*flush)(void*, unsigned long),
++			 unsigned char *out_buf, long out_len,
++			 long *in_pos,
++			 void (*error)(char *x))
++{
++	ZSTD_inBuffer in;
++	ZSTD_outBuffer out;
++	ZSTD_frameParams params;
++	void *in_allocated = NULL;
++	void *out_allocated = NULL;
++	void *wksp = NULL;
++	size_t wksp_size;
++	ZSTD_DStream *dstream;
++	int err;
++	size_t ret;
++
++	if (out_len == 0)
++		out_len = LONG_MAX; /* no limit */
++
++	if (fill == NULL && flush == NULL)
++		/*
++		 * We can decompress faster and with less memory when we have a
++		 * single chunk.
++		 */
++		return decompress_single(in_buf, in_len, out_buf, out_len,
++					 in_pos, error);
++
++	/*
++	 * If in_buf is not provided, we must be using fill(), so allocate
++	 * a large enough buffer. If it is provided, it must be at least
++	 * ZSTD_IOBUF_SIZE large.
++	 */
++	if (in_buf == NULL) {
++		in_allocated = large_malloc(ZSTD_IOBUF_SIZE);
++		if (in_allocated == NULL) {
++			error("Out of memory while allocating input buffer");
++			err = -1;
++			goto out;
++		}
++		in_buf = in_allocated;
++		in_len = 0;
++	}
++	/* Read the first chunk, since we need to decode the frame header. */
++	if (fill != NULL)
++		in_len = fill(in_buf, ZSTD_IOBUF_SIZE);
++	if (in_len < 0) {
++		error("ZSTD-compressed data is truncated");
++		err = -1;
++		goto out;
++	}
++	/* Set the first non-empty input buffer. */
++	in.src = in_buf;
++	in.pos = 0;
++	in.size = in_len;
++	/* Allocate the output buffer if we are using flush(). */
++	if (flush != NULL) {
++		out_allocated = large_malloc(ZSTD_IOBUF_SIZE);
++		if (out_allocated == NULL) {
++			error("Out of memory while allocating output buffer");
++			err = -1;
++			goto out;
++		}
++		out_buf = out_allocated;
++		out_len = ZSTD_IOBUF_SIZE;
++	}
++	/* Set the output buffer. */
++	out.dst = out_buf;
++	out.pos = 0;
++	out.size = out_len;
++
++	/*
++	 * We need to know the window size to allocate the ZSTD_DStream.
++	 * Since we are streaming, we need to allocate a buffer for the sliding
++	 * window. The window size varies from 1 KB to ZSTD_WINDOWSIZE_MAX
++	 * (8 MB), so it is important to use the actual value so as not to
++	 * waste memory when it is smaller.
++	 */
++	ret = ZSTD_getFrameParams(&params, in.src, in.size);
++	err = handle_zstd_error(ret, error);
++	if (err)
++		goto out;
++	if (ret != 0) {
++		error("ZSTD-compressed data has an incomplete frame header");
++		err = -1;
++		goto out;
++	}
++	if (params.windowSize > ZSTD_WINDOWSIZE_MAX) {
++		error("ZSTD-compressed data has too large a window size");
++		err = -1;
++		goto out;
++	}
++
++	/*
++	 * Allocate the ZSTD_DStream now that we know how much memory is
++	 * required.
++	 */
++	wksp_size = ZSTD_DStreamWorkspaceBound(params.windowSize);
++	wksp = large_malloc(wksp_size);
++	dstream = ZSTD_initDStream(params.windowSize, wksp, wksp_size);
++	if (dstream == NULL) {
++		error("Out of memory while allocating ZSTD_DStream");
++		err = -1;
++		goto out;
++	}
++
++	/*
++	 * Decompression loop:
++	 * Read more data if necessary (error if no more data can be read).
++	 * Call the decompression function, which returns 0 when finished.
++	 * Flush any data produced if using flush().
++	 */
++	if (in_pos != NULL)
++		*in_pos = 0;
++	do {
++		/*
++		 * If we need to reload data, either we have fill() and can
++		 * try to get more data, or we don't and the input is truncated.
++		 */
++		if (in.pos == in.size) {
++			if (in_pos != NULL)
++				*in_pos += in.pos;
++			in_len = fill ? fill(in_buf, ZSTD_IOBUF_SIZE) : -1;
++			if (in_len < 0) {
++				error("ZSTD-compressed data is truncated");
++				err = -1;
++				goto out;
++			}
++			in.pos = 0;
++			in.size = in_len;
++		}
++		/* Returns zero when the frame is complete. */
++		ret = ZSTD_decompressStream(dstream, &out, &in);
++		err = handle_zstd_error(ret, error);
++		if (err)
++			goto out;
++		/* Flush all of the data produced if using flush(). */
++		if (flush != NULL && out.pos > 0) {
++			if (out.pos != flush(out.dst, out.pos)) {
++				error("Failed to flush()");
++				err = -1;
++				goto out;
++			}
++			out.pos = 0;
++		}
++	} while (ret != 0);
++
++	if (in_pos != NULL)
++		*in_pos += in.pos;
++
++	err = 0;
++out:
++	if (in_allocated != NULL)
++		large_free(in_allocated);
++	if (out_allocated != NULL)
++		large_free(out_allocated);
++	if (wksp != NULL)
++		large_free(wksp);
++	return err;
++}
++
++#ifndef ZSTD_PREBOOT
++STATIC int INIT unzstd(unsigned char *buf, long len,
++		       long (*fill)(void*, unsigned long),
++		       long (*flush)(void*, unsigned long),
++		       unsigned char *out_buf,
++		       long *pos,
++		       void (*error)(char *x))
++{
++	return __unzstd(buf, len, fill, flush, out_buf, 0, pos, error);
++}
++#else
++STATIC int INIT __decompress(unsigned char *buf, long len,
++			     long (*fill)(void*, unsigned long),
++			     long (*flush)(void*, unsigned long),
++			     unsigned char *out_buf, long out_len,
++			     long *pos,
++			     void (*error)(char *x))
++{
++	return __unzstd(buf, len, fill, flush, out_buf, out_len, pos, error);
++}
++#endif
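The safety-margin arithmetic in the long comment at the top of decompress_unzstd.c can be restated outside the kernel. The following Python sketch mirrors that comment's formula only; it is not kernel code, and the function names are placeholders:

```python
# Worst-case margin for in-place zstd decompression, per the comment in
# lib/decompress_unzstd.c: 22 bytes of frame/checksum overhead, 3 block-header
# bytes per 128 KiB of uncompressed data, plus one whole 128 KiB block so the
# decompressor never overwrites bytes it is still reading.

BLOCK_SIZE = 128 * 1024   # largest possible zstd block
OVERHEAD = 22             # frame header (18) + checksum (4)

def safety_margin(uncompressed_size):
    # Direct integer translation of: 22 + size * 3 / 131072 + 131072
    return OVERHEAD + uncompressed_size * 3 // BLOCK_SIZE + BLOCK_SIZE

def safety_margin_kernel(uncompressed_size):
    # The kernel upper-bounds 3/131072 with 1/32768, i.e. a right shift by 15,
    # which is cheaper and never smaller than the exact term.
    return OVERHEAD + (uncompressed_size >> 15) + BLOCK_SIZE
```

The shift-based bound dominates the exact formula because 4/131072 ≥ 3/131072.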

diff --git a/5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch b/5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch
new file mode 100644
index 0000000..e6598e6
--- /dev/null
+++ b/5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch
@@ -0,0 +1,65 @@
+diff --git a/init/Kconfig b/init/Kconfig
+index 20a6ac33761c..9b646a25918e 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -173,13 +173,16 @@ config HAVE_KERNEL_LZO
+ config HAVE_KERNEL_LZ4
+ 	bool
+ 
++config HAVE_KERNEL_ZSTD
++	bool
++
+ config HAVE_KERNEL_UNCOMPRESSED
+ 	bool
+ 
+ choice
+ 	prompt "Kernel compression mode"
+ 	default KERNEL_GZIP
+-	depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_UNCOMPRESSED
++	depends on HAVE_KERNEL_GZIP || HAVE_KERNEL_BZIP2 || HAVE_KERNEL_LZMA || HAVE_KERNEL_XZ || HAVE_KERNEL_LZO || HAVE_KERNEL_LZ4 || HAVE_KERNEL_ZSTD || HAVE_KERNEL_UNCOMPRESSED
+ 	help
+ 	  The linux kernel is a kind of self-extracting executable.
+ 	  Several compression algorithms are available, which differ
+@@ -258,6 +261,16 @@ config KERNEL_LZ4
+ 	  is about 8% bigger than LZO. But the decompression speed is
+ 	  faster than LZO.
+ 
++config KERNEL_ZSTD
++	bool "ZSTD"
++	depends on HAVE_KERNEL_ZSTD
++	help
++	  ZSTD is a compression algorithm targeting intermediate compression
++	  with fast decompression speed. It will compress better than GZIP and
++	  decompress around the same speed as LZO, but slower than LZ4. You
++	  will need at least 192 KB RAM for booting. The zstd command
++	  line tool is required for compression.
++
+ config KERNEL_UNCOMPRESSED
+ 	bool "None"
+ 	depends on HAVE_KERNEL_UNCOMPRESSED
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 752ff0a225a9..4b99893efa3d 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -394,6 +394,21 @@ quiet_cmd_xzkern = XZKERN  $@
+ quiet_cmd_xzmisc = XZMISC  $@
+       cmd_xzmisc = cat $(real-prereqs) | xz --check=crc32 --lzma2=dict=1MiB > $@
+ 
++# ZSTD
++# ---------------------------------------------------------------------------
++# Appends the uncompressed size of the data using size_append. The .zst
++# format has the size information available at the beginning of the file too,
++# but it's in a more complex format and it's good to avoid changing the part
++# of the boot code that reads the uncompressed size.
++# Note that the bytes added by size_append will make the zstd tool think that
++# the file is corrupt. This is expected.
++
++quiet_cmd_zstd = ZSTD    $@
++cmd_zstd = (cat $(filter-out FORCE,$^) | \
++	zstd -19 && \
++        $(call size_append, $(filter-out FORCE,$^))) > $@ || \
++	(rm -f $@ ; false)
++
+ # ASM offsets
+ # ---------------------------------------------------------------------------
+ 
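The `$(call size_append, ...)` step in cmd_zstd above appends the uncompressed length after the compressed stream so the boot code can read it without parsing the .zst framing; as the comment notes, this trailing data makes the zstd tool consider the file corrupt. A rough Python sketch of that convention follows — the 4-byte little-endian trailer mirrors how size_append in scripts/Makefile.lib emits sizes, and the helper names here are hypothetical:

```python
import struct

def append_size(compressed, uncompressed_len):
    # Mirror scripts/Makefile.lib size_append: store the uncompressed length
    # as a 32-bit little-endian integer after the compressed payload.
    return compressed + struct.pack("<I", uncompressed_len)

def read_appended_size(image):
    # The boot code reads the trailing 4 bytes instead of parsing the
    # compressor-specific frame format.
    return struct.unpack("<I", image[-4:])[0]
```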

diff --git a/5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch b/5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch
new file mode 100644
index 0000000..6054414
--- /dev/null
+++ b/5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch
@@ -0,0 +1,48 @@
+diff --git a/usr/Kconfig b/usr/Kconfig
+index bdf5bbd40727..43aca37d09b5 100644
+--- a/usr/Kconfig
++++ b/usr/Kconfig
+@@ -100,6 +100,15 @@ config RD_LZ4
+ 	  Support loading of a LZ4 encoded initial ramdisk or cpio buffer
+ 	  If unsure, say N.
+ 
++config RD_ZSTD
++	bool "Support initial ramdisk/ramfs compressed using ZSTD"
++	default y
++	depends on BLK_DEV_INITRD
++	select DECOMPRESS_ZSTD
++	help
++	  Support loading of a ZSTD encoded initial ramdisk or cpio buffer.
++	  If unsure, say N.
++
+ choice
+ 	prompt "Built-in initramfs compression mode"
+ 	depends on INITRAMFS_SOURCE != ""
+@@ -207,4 +216,15 @@ config INITRAMFS_COMPRESSION_LZ4
+ 	  If you choose this, keep in mind that most distros don't provide lz4
+ 	  by default which could cause a build failure.
+ 
++config INITRAMFS_COMPRESSION_ZSTD
++	bool "ZSTD"
++	depends on RD_ZSTD
++	help
++	  ZSTD is a compression algorithm targeting intermediate compression
++	  with fast decompression speed. It will compress better than GZIP and
++	  decompress around the same speed as LZO, but slower than LZ4.
++
++	  If you choose this, keep in mind that you may need to install the zstd
++	  tool to be able to compress the initramfs.
++
+ endchoice
+diff --git a/usr/Makefile b/usr/Makefile
+index c12e6b15ce72..b1a81a40eab1 100644
+--- a/usr/Makefile
++++ b/usr/Makefile
+@@ -15,6 +15,7 @@ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZMA)	:= lzma
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_XZ)	:= xzmisc
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZO)	:= lzo
+ compress-$(CONFIG_INITRAMFS_COMPRESSION_LZ4)	:= lz4
++compress-$(CONFIG_INITRAMFS_COMPRESSION_ZSTD)	:= zstd
+ 
+ obj-$(CONFIG_BLK_DEV_INITRD) := initramfs_data.o
+ 

diff --git a/5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch b/5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch
new file mode 100644
index 0000000..b4fd239
--- /dev/null
+++ b/5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch
@@ -0,0 +1,20 @@
+diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
+index 97d9b6d6c1af..b820875c5c95 100644
+--- a/arch/x86/boot/header.S
++++ b/arch/x86/boot/header.S
+@@ -536,8 +536,14 @@ pref_address:		.quad LOAD_PHYSICAL_ADDR	# preferred load addr
+ # the size-dependent part now grows so fast.
+ #
+ # extra_bytes = (uncompressed_size >> 8) + 65536
++#
++# ZSTD compressed data grows by at most 3 bytes per 128K, and only has a 22
++# byte fixed overhead but has a maximum block size of 128K, so it needs a
++# larger margin.
++#
++# extra_bytes = (uncompressed_size >> 8) + 131072
+ 
+-#define ZO_z_extra_bytes	((ZO_z_output_len >> 8) + 65536)
++#define ZO_z_extra_bytes	((ZO_z_output_len >> 8) + 131072)
+ #if ZO_z_output_len > ZO_z_input_len
+ # define ZO_z_extract_offset	(ZO_z_output_len + ZO_z_extra_bytes - \
+ 				 ZO_z_input_len)
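The header.S change above can be sanity-checked numerically. This is a plain restatement of the two #define variants of ZO_z_extra_bytes, nothing more:

```python
def extra_bytes_old(output_len):
    # Pre-zstd margin: 1/256 of the uncompressed image plus 64 KiB.
    return (output_len >> 8) + 65536

def extra_bytes_zstd(output_len):
    # zstd needs room for one full 128 KiB block it may still be reading,
    # so the fixed part of the margin doubles.
    return (output_len >> 8) + 131072
```

The bump is a flat 64 KiB regardless of kernel size; only the constant term changes.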

diff --git a/5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch b/5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch
new file mode 100644
index 0000000..5fc8a77
--- /dev/null
+++ b/5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch
@@ -0,0 +1,92 @@
+diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
+index c9c201596c3e..cedcf4d49bf0 100644
+--- a/Documentation/x86/boot.rst
++++ b/Documentation/x86/boot.rst
+@@ -786,9 +786,9 @@ Protocol:	2.08+
+   uncompressed data should be determined using the standard magic
+   numbers.  The currently supported compression formats are gzip
+   (magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A), LZMA
+-  (magic number 5D 00), XZ (magic number FD 37), and LZ4 (magic number
+-  02 21).  The uncompressed payload is currently always ELF (magic
+-  number 7F 45 4C 46).
++  (magic number 5D 00), XZ (magic number FD 37), LZ4 (magic number
++  02 21) and ZSTD (magic number 28 B5). The uncompressed payload is
++  currently always ELF (magic number 7F 45 4C 46).
+ 
+ ============	==============
+ Field name:	payload_length
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index beea77046f9b..12d88997a3a6 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -183,6 +183,7 @@ config X86
+ 	select HAVE_KERNEL_LZMA
+ 	select HAVE_KERNEL_LZO
+ 	select HAVE_KERNEL_XZ
++	select HAVE_KERNEL_ZSTD
+ 	select HAVE_KPROBES
+ 	select HAVE_KPROBES_ON_FTRACE
+ 	select HAVE_FUNCTION_ERROR_INJECTION
+diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
+index 26050ae0b27e..8233f598f15b 100644
+--- a/arch/x86/boot/compressed/Makefile
++++ b/arch/x86/boot/compressed/Makefile
+@@ -24,7 +24,7 @@ OBJECT_FILES_NON_STANDARD	:= y
+ KCOV_INSTRUMENT		:= n
+ 
+ targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \
+-	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4
++	vmlinux.bin.xz vmlinux.bin.lzo vmlinux.bin.lz4 vmlinux.bin.zst
+ 
+ KBUILD_CFLAGS := -m$(BITS) -O2
+ KBUILD_CFLAGS += -fno-strict-aliasing $(call cc-option, -fPIE, -fPIC)
+@@ -145,6 +145,8 @@ $(obj)/vmlinux.bin.lzo: $(vmlinux.bin.all-y) FORCE
+ 	$(call if_changed,lzo)
+ $(obj)/vmlinux.bin.lz4: $(vmlinux.bin.all-y) FORCE
+ 	$(call if_changed,lz4)
++$(obj)/vmlinux.bin.zst: $(vmlinux.bin.all-y) FORCE
++	$(call if_changed,zstd)
+ 
+ suffix-$(CONFIG_KERNEL_GZIP)	:= gz
+ suffix-$(CONFIG_KERNEL_BZIP2)	:= bz2
+@@ -152,6 +154,7 @@ suffix-$(CONFIG_KERNEL_LZMA)	:= lzma
+ suffix-$(CONFIG_KERNEL_XZ)	:= xz
+ suffix-$(CONFIG_KERNEL_LZO) 	:= lzo
+ suffix-$(CONFIG_KERNEL_LZ4) 	:= lz4
++suffix-$(CONFIG_KERNEL_ZSTD)	:= zst
+ 
+ quiet_cmd_mkpiggy = MKPIGGY $@
+       cmd_mkpiggy = $(obj)/mkpiggy $< > $@
+diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
+index 9652d5c2afda..39e592d0e0b4 100644
+--- a/arch/x86/boot/compressed/misc.c
++++ b/arch/x86/boot/compressed/misc.c
+@@ -77,6 +77,10 @@ static int lines, cols;
+ #ifdef CONFIG_KERNEL_LZ4
+ #include "../../../../lib/decompress_unlz4.c"
+ #endif
++
++#ifdef CONFIG_KERNEL_ZSTD
++#include "../../../../lib/decompress_unzstd.c"
++#endif
+ /*
+  * NOTE: When adding a new decompressor, please update the analysis in
+  * ../header.S.
+diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
+index 680c320363db..d6dd43d25d9f 100644
+--- a/arch/x86/include/asm/boot.h
++++ b/arch/x86/include/asm/boot.h
+@@ -24,9 +24,11 @@
+ # error "Invalid value for CONFIG_PHYSICAL_ALIGN"
+ #endif
+ 
+-#ifdef CONFIG_KERNEL_BZIP2
++#if defined(CONFIG_KERNEL_BZIP2)
+ # define BOOT_HEAP_SIZE		0x400000
+-#else /* !CONFIG_KERNEL_BZIP2 */
++#elif defined(CONFIG_KERNEL_ZSTD)
++# define BOOT_HEAP_SIZE		 0x30000
++#else
+ # define BOOT_HEAP_SIZE		 0x10000
+ #endif
+ 
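The boot.rst hunk above lists the two-byte magic numbers the decompressor dispatches on. A minimal Python sketch of the compressed_formats[] lookup from lib/decompress.c — format names only, where the real table maps each magic to a decompressor function:

```python
# Two-byte magic numbers as listed in lib/decompress.c and boot.rst.
COMPRESSED_FORMATS = {
    b"\x1f\x8b": "gzip",
    b"\x1f\x9e": "gzip",
    b"\x42\x5a": "bzip2",
    b"\x5d\x00": "lzma",
    b"\xfd\x37": "xz",
    b"\x89\x4c": "lzo",
    b"\x02\x21": "lz4",
    b"\x28\xb5": "zstd",  # added by this patch series
}

def identify(image):
    """Return the compression format name for a kernel image, or None."""
    return COMPRESSED_FORMATS.get(bytes(image[:2]))
```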

diff --git a/5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch b/5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch
new file mode 100644
index 0000000..7506899
--- /dev/null
+++ b/5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch
@@ -0,0 +1,12 @@
+diff --git a/.gitignore b/.gitignore
+index 72ef86a5570d..edb0191c294f 100644
+--- a/.gitignore
++++ b/.gitignore
+@@ -43,6 +43,7 @@
+ *.tab.[ch]
+ *.tar
+ *.xz
++*.zst
+ Module.symvers
+ modules.builtin
+ modules.order


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-12 15:29 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-12 15:29 UTC (permalink / raw
  To: gentoo-commits

commit:     a47c371bfe46821952d584e7c0102948682a4602
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Apr 12 15:28:25 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Apr 12 15:28:25 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=a47c371b

Bump ZSTD Patchset to V5

Closes: https://bugs.gentoo.org/716520

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                              | 16 ++++++++--------
 ...> 5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch |  0
 ...5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch |  0
 ...5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch |  6 +++---
 ...3_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch | 10 +++++-----
 ...5-5-8-add-support-for-zstd-compressed-initramfs.patch |  8 +++++---
 ...> 5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch |  4 ++--
 ..._ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch | 12 ++++++------
 ...ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch |  4 ++--
 9 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/0000_README b/0000_README
index 7af0186..458ce4b 100644
--- a/0000_README
+++ b/0000_README
@@ -79,35 +79,35 @@ Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.
 
-Patch: 	5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch
+Patch: 	5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   lib: prepare zstd for preboot environment
 
-Patch:  5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch
+Patch:  5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   lib: prepare xxhash for preboot environment
 
-Patch:  5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch
+Patch:  5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   lib: add zstd support to decompress
 
-Patch:  5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch
+Patch:  5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   init: add support for zstd compressed kernel
 
-Patch:  5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch
+Patch:  5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   usr: add support for zstd compressed initramfs
 
-Patch:  5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch
+Patch:  5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   x86: bump ZO_z_extra_bytes margin for zstd
 
-Patch:  5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch
+Patch:  5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   x86: Add support for ZSTD compressed kernel
 
-Patch:  5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch
+Patch:  5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
 From:   https://lkml.org/lkml/2020/4/1/29
 Desc:   .gitignore: add ZSTD-compressed files
 

diff --git a/5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch b/5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch
similarity index 100%
rename from 5000_ZSTD-v4-1-8-prepare-zstd-for-preboot-env.patch
rename to 5000_ZSTD-v5-1-8-prepare-zstd-for-preboot-env.patch

diff --git a/5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch b/5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch
similarity index 100%
rename from 5001_ZSTD-v4-2-8-prepare-xxhash-for-preboot-env.patch
rename to 5001_ZSTD-v5-2-8-prepare-xxhash-for-preboot-env.patch

diff --git a/5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch b/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
similarity index 98%
rename from 5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch
rename to 5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
index 4f11460..1c22fa3 100644
--- a/5002_ZSTD-v4-3-8-add-zstd-support-to-decompress.patch
+++ b/5002_ZSTD-v5-3-8-add-zstd-support-to-decompress.patch
@@ -16,7 +16,7 @@ index 000000000000..56d539ae880f
 +	   void (*error_fn)(char *x));
 +#endif
 diff --git a/lib/Kconfig b/lib/Kconfig
-index bc7e56370129..11de5fa09a52 100644
+index 5d53f9609c25..e883aecb9279 100644
 --- a/lib/Kconfig
 +++ b/lib/Kconfig
 @@ -336,6 +336,10 @@ config DECOMPRESS_LZ4
@@ -31,10 +31,10 @@ index bc7e56370129..11de5fa09a52 100644
  # Generic allocator support is selected if needed
  #
 diff --git a/lib/Makefile b/lib/Makefile
-index 611872c06926..09ad45ba6883 100644
+index ab68a8674360..3ce4ac296611 100644
 --- a/lib/Makefile
 +++ b/lib/Makefile
-@@ -160,6 +160,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
+@@ -166,6 +166,7 @@ lib-$(CONFIG_DECOMPRESS_LZMA) += decompress_unlzma.o
  lib-$(CONFIG_DECOMPRESS_XZ) += decompress_unxz.o
  lib-$(CONFIG_DECOMPRESS_LZO) += decompress_unlzo.o
  lib-$(CONFIG_DECOMPRESS_LZ4) += decompress_unlz4.o

diff --git a/5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch b/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
similarity index 91%
rename from 5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch
rename to 5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
index e6598e6..d9dc79e 100644
--- a/5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch
+++ b/5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch
@@ -1,8 +1,8 @@
 diff --git a/init/Kconfig b/init/Kconfig
-index 20a6ac33761c..9b646a25918e 100644
+index 492bb7000aa4..806874fdd663 100644
 --- a/init/Kconfig
 +++ b/init/Kconfig
-@@ -173,13 +173,16 @@ config HAVE_KERNEL_LZO
+@@ -176,13 +176,16 @@ config HAVE_KERNEL_LZO
  config HAVE_KERNEL_LZ4
  	bool
  
@@ -20,7 +20,7 @@ index 20a6ac33761c..9b646a25918e 100644
  	help
  	  The linux kernel is a kind of self-extracting executable.
  	  Several compression algorithms are available, which differ
-@@ -258,6 +261,16 @@ config KERNEL_LZ4
+@@ -261,6 +264,16 @@ config KERNEL_LZ4
  	  is about 8% bigger than LZO. But the decompression speed is
  	  faster than LZO.
  
@@ -38,10 +38,10 @@ index 20a6ac33761c..9b646a25918e 100644
  	bool "None"
  	depends on HAVE_KERNEL_UNCOMPRESSED
 diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
-index 752ff0a225a9..4b99893efa3d 100644
+index b12dd5ba4896..efe69b78d455 100644
 --- a/scripts/Makefile.lib
 +++ b/scripts/Makefile.lib
-@@ -394,6 +394,21 @@ quiet_cmd_xzkern = XZKERN  $@
+@@ -405,6 +405,21 @@ quiet_cmd_xzkern = XZKERN  $@
  quiet_cmd_xzmisc = XZMISC  $@
        cmd_xzmisc = cat $(real-prereqs) | xz --check=crc32 --lzma2=dict=1MiB > $@
  

diff --git a/5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch b/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
similarity index 91%
rename from 5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch
rename to 5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
index 6054414..0096db1 100644
--- a/5004_ZSTD-v4-5-8-add-support-for-zstd-compressed-initramfs.patch
+++ b/5004_ZSTD-v5-5-8-add-support-for-zstd-compressed-initramfs.patch
@@ -1,5 +1,5 @@
 diff --git a/usr/Kconfig b/usr/Kconfig
-index bdf5bbd40727..43aca37d09b5 100644
+index 96afb03b65f9..2599bc21c1b2 100644
 --- a/usr/Kconfig
 +++ b/usr/Kconfig
 @@ -100,6 +100,15 @@ config RD_LZ4
@@ -18,7 +18,7 @@ index bdf5bbd40727..43aca37d09b5 100644
  choice
  	prompt "Built-in initramfs compression mode"
  	depends on INITRAMFS_SOURCE != ""
-@@ -207,4 +216,15 @@ config INITRAMFS_COMPRESSION_LZ4
+@@ -196,6 +205,17 @@ config INITRAMFS_COMPRESSION_LZ4
  	  If you choose this, keep in mind that most distros don't provide lz4
  	  by default which could cause a build failure.
  
@@ -33,7 +33,9 @@ index bdf5bbd40727..43aca37d09b5 100644
 +	  If you choose this, keep in mind that you may need to install the zstd
 +	  tool to be able to compress the initram.
 +
- endchoice
+ config INITRAMFS_COMPRESSION_NONE
+ 	bool "None"
+ 	help
 diff --git a/usr/Makefile b/usr/Makefile
 index c12e6b15ce72..b1a81a40eab1 100644
 --- a/usr/Makefile

diff --git a/5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch b/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
similarity index 87%
rename from 5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch
rename to 5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
index b4fd239..4e86d56 100644
--- a/5005_ZSTD-v4-6-8-bump-ZO-z-extra-bytes-margin.patch
+++ b/5005_ZSTD-v5-6-8-bump-ZO-z-extra-bytes-margin.patch
@@ -1,8 +1,8 @@
 diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
-index 97d9b6d6c1af..b820875c5c95 100644
+index 735ad7f21ab0..6dbd7e9f74c9 100644
 --- a/arch/x86/boot/header.S
 +++ b/arch/x86/boot/header.S
-@@ -536,8 +536,14 @@ pref_address:		.quad LOAD_PHYSICAL_ADDR	# preferred load addr
+@@ -539,8 +539,14 @@ pref_address:		.quad LOAD_PHYSICAL_ADDR	# preferred load addr
  # the size-dependent part now grows so fast.
  #
  # extra_bytes = (uncompressed_size >> 8) + 65536

diff --git a/5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch b/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
similarity index 93%
rename from 5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch
rename to 5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
index 5fc8a77..6147136 100644
--- a/5006_ZSTD-v4-7-8-support-for-ZSTD-compressed-kernel.patch
+++ b/5006_ZSTD-v5-7-8-support-for-ZSTD-compressed-kernel.patch
@@ -1,8 +1,8 @@
 diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
-index c9c201596c3e..cedcf4d49bf0 100644
+index fa7ddc0428c8..0404e99dc1d4 100644
 --- a/Documentation/x86/boot.rst
 +++ b/Documentation/x86/boot.rst
-@@ -786,9 +786,9 @@ Protocol:	2.08+
+@@ -782,9 +782,9 @@ Protocol:	2.08+
    uncompressed data should be determined using the standard magic
    numbers.  The currently supported compression formats are gzip
    (magic numbers 1F 8B or 1F 9E), bzip2 (magic number 42 5A), LZMA
@@ -16,10 +16,10 @@ index c9c201596c3e..cedcf4d49bf0 100644
  ============	==============
  Field name:	payload_length
 diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
-index beea77046f9b..12d88997a3a6 100644
+index 886fa8368256..912f783bc01a 100644
 --- a/arch/x86/Kconfig
 +++ b/arch/x86/Kconfig
-@@ -183,6 +183,7 @@ config X86
+@@ -185,6 +185,7 @@ config X86
  	select HAVE_KERNEL_LZMA
  	select HAVE_KERNEL_LZO
  	select HAVE_KERNEL_XZ
@@ -28,10 +28,10 @@ index beea77046f9b..12d88997a3a6 100644
  	select HAVE_KPROBES_ON_FTRACE
  	select HAVE_FUNCTION_ERROR_INJECTION
 diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
-index 26050ae0b27e..8233f598f15b 100644
+index 7619742f91c9..471e61400a2e 100644
 --- a/arch/x86/boot/compressed/Makefile
 +++ b/arch/x86/boot/compressed/Makefile
-@@ -24,7 +24,7 @@ OBJECT_FILES_NON_STANDARD	:= y
+@@ -26,7 +26,7 @@ OBJECT_FILES_NON_STANDARD	:= y
  KCOV_INSTRUMENT		:= n
  
  targets := vmlinux vmlinux.bin vmlinux.bin.gz vmlinux.bin.bz2 vmlinux.bin.lzma \

diff --git a/5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch b/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
similarity index 72%
rename from 5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch
rename to 5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
index 7506899..adf8578 100644
--- a/5007_ZSTD-v4-8-8-gitignore-add-ZSTD-compressed-files.patch
+++ b/5007_ZSTD-v5-8-8-gitignore-add-ZSTD-compressed-files.patch
@@ -1,8 +1,8 @@
 diff --git a/.gitignore b/.gitignore
-index 72ef86a5570d..edb0191c294f 100644
+index 2258e906f01c..23871de69072 100644
 --- a/.gitignore
 +++ b/.gitignore
-@@ -43,6 +43,7 @@
+@@ -44,6 +44,7 @@
  *.tab.[ch]
  *.tar
  *.xz


^ permalink raw reply related	[flat|nested] 30+ messages in thread
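
[Editorial note, not part of the archived message: the commit above renames every patch in the ZSTD series from a `v4` to a `v5` filename while keeping the numeric prefix and description intact. Assuming only that naming convention (the filenames themselves are taken from the diff headers), the mapping can be sketched with plain shell parameter expansion — this is an illustrative sketch, not a script the commit actually used.]

```shell
#!/bin/sh
# Sketch: derive the v5 filename from a v4 one, as in the renames above.
# The only assumption is the ZSTD-vN marker embedded in each patch name.
old="5003_ZSTD-v4-4-8-add-support-for-zstd-compres-kern.patch"
new="${old/ZSTD-v4/ZSTD-v5}"   # bash/ksh substring replacement
echo "$new"
```

Running this prints `5003_ZSTD-v5-4-8-add-support-for-zstd-compres-kern.patch`, matching the `rename to` line in the first diff of the series.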

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-13 12:21 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-13 12:21 UTC (permalink / raw
  To: gentoo-commits

commit:     323c1062fe0439f87dd5f634f23f2e360b8c4b03
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon Apr 13 12:21:08 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon Apr 13 12:21:08 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=323c1062

Linux patch 5.6.4

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1003_linux-5.6.4.patch | 1810 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1814 insertions(+)

diff --git a/0000_README b/0000_README
index 458ce4b..4f1ee49 100644
--- a/0000_README
+++ b/0000_README
@@ -55,6 +55,10 @@ Patch:  1002_linux-5.6.3.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.3
 
+Patch:  1003_linux-5.6.4.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.4
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1003_linux-5.6.4.patch b/1003_linux-5.6.4.patch
new file mode 100644
index 0000000..9246d83
--- /dev/null
+++ b/1003_linux-5.6.4.patch
@@ -0,0 +1,1810 @@
+diff --git a/Makefile b/Makefile
+index 41aafb394d25..0a7e41471838 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 3
++SUBLEVEL = 4
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/s390/include/asm/lowcore.h b/arch/s390/include/asm/lowcore.h
+index 237ee0c4169f..612ed3c6d581 100644
+--- a/arch/s390/include/asm/lowcore.h
++++ b/arch/s390/include/asm/lowcore.h
+@@ -141,7 +141,9 @@ struct lowcore {
+ 
+ 	/* br %r1 trampoline */
+ 	__u16	br_r1_trampoline;		/* 0x0400 */
+-	__u8	pad_0x0402[0x0e00-0x0402];	/* 0x0402 */
++	__u32	return_lpswe;			/* 0x0402 */
++	__u32	return_mcck_lpswe;		/* 0x0406 */
++	__u8	pad_0x040a[0x0e00-0x040a];	/* 0x040a */
+ 
+ 	/*
+ 	 * 0xe00 contains the address of the IPL Parameter Information
+diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
+index aadb3d0e2adc..8e7fb3954dc1 100644
+--- a/arch/s390/include/asm/processor.h
++++ b/arch/s390/include/asm/processor.h
+@@ -161,6 +161,7 @@ typedef struct thread_struct thread_struct;
+ #define INIT_THREAD {							\
+ 	.ksp = sizeof(init_stack) + (unsigned long) &init_stack,	\
+ 	.fpu.regs = (void *) init_task.thread.fpu.fprs,			\
++	.last_break = 1,						\
+ }
+ 
+ /*
+diff --git a/arch/s390/include/asm/setup.h b/arch/s390/include/asm/setup.h
+index b241ddb67caf..534f212753d6 100644
+--- a/arch/s390/include/asm/setup.h
++++ b/arch/s390/include/asm/setup.h
+@@ -8,6 +8,7 @@
+ 
+ #include <linux/bits.h>
+ #include <uapi/asm/setup.h>
++#include <linux/build_bug.h>
+ 
+ #define EP_OFFSET		0x10008
+ #define EP_STRING		"S390EP"
+@@ -162,6 +163,12 @@ static inline unsigned long kaslr_offset(void)
+ 	return __kaslr_offset;
+ }
+ 
++static inline u32 gen_lpswe(unsigned long addr)
++{
++	BUILD_BUG_ON(addr > 0xfff);
++	return 0xb2b20000 | addr;
++}
++
+ #else /* __ASSEMBLY__ */
+ 
+ #define IPL_DEVICE	(IPL_DEVICE_OFFSET)
+diff --git a/arch/s390/kernel/asm-offsets.c b/arch/s390/kernel/asm-offsets.c
+index ce33406cfe83..e80f0e6f5972 100644
+--- a/arch/s390/kernel/asm-offsets.c
++++ b/arch/s390/kernel/asm-offsets.c
+@@ -124,6 +124,8 @@ int main(void)
+ 	OFFSET(__LC_EXT_DAMAGE_CODE, lowcore, external_damage_code);
+ 	OFFSET(__LC_MCCK_FAIL_STOR_ADDR, lowcore, failing_storage_address);
+ 	OFFSET(__LC_LAST_BREAK, lowcore, breaking_event_addr);
++	OFFSET(__LC_RETURN_LPSWE, lowcore, return_lpswe);
++	OFFSET(__LC_RETURN_MCCK_LPSWE, lowcore, return_mcck_lpswe);
+ 	OFFSET(__LC_RST_OLD_PSW, lowcore, restart_old_psw);
+ 	OFFSET(__LC_EXT_OLD_PSW, lowcore, external_old_psw);
+ 	OFFSET(__LC_SVC_OLD_PSW, lowcore, svc_old_psw);
+diff --git a/arch/s390/kernel/entry.S b/arch/s390/kernel/entry.S
+index 9205add8481d..3ae64914bd14 100644
+--- a/arch/s390/kernel/entry.S
++++ b/arch/s390/kernel/entry.S
+@@ -115,26 +115,29 @@ _LPP_OFFSET	= __LC_LPP
+ 
+ 	.macro	SWITCH_ASYNC savearea,timer
+ 	tmhh	%r8,0x0001		# interrupting from user ?
+-	jnz	1f
++	jnz	2f
+ 	lgr	%r14,%r9
++	cghi	%r14,__LC_RETURN_LPSWE
++	je	0f
+ 	slg	%r14,BASED(.Lcritical_start)
+ 	clg	%r14,BASED(.Lcritical_length)
+-	jhe	0f
++	jhe	1f
++0:
+ 	lghi	%r11,\savearea		# inside critical section, do cleanup
+ 	brasl	%r14,cleanup_critical
+ 	tmhh	%r8,0x0001		# retest problem state after cleanup
+-	jnz	1f
+-0:	lg	%r14,__LC_ASYNC_STACK	# are we already on the target stack?
++	jnz	2f
++1:	lg	%r14,__LC_ASYNC_STACK	# are we already on the target stack?
+ 	slgr	%r14,%r15
+ 	srag	%r14,%r14,STACK_SHIFT
+-	jnz	2f
++	jnz	3f
+ 	CHECK_STACK \savearea
+ 	aghi	%r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE)
+-	j	3f
+-1:	UPDATE_VTIME %r14,%r15,\timer
++	j	4f
++2:	UPDATE_VTIME %r14,%r15,\timer
+ 	BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+-2:	lg	%r15,__LC_ASYNC_STACK	# load async stack
+-3:	la	%r11,STACK_FRAME_OVERHEAD(%r15)
++3:	lg	%r15,__LC_ASYNC_STACK	# load async stack
++4:	la	%r11,STACK_FRAME_OVERHEAD(%r15)
+ 	.endm
+ 
+ 	.macro UPDATE_VTIME w1,w2,enter_timer
+@@ -401,7 +404,7 @@ ENTRY(system_call)
+ 	stpt	__LC_EXIT_TIMER
+ 	mvc	__VDSO_ECTG_BASE(16,%r14),__LC_EXIT_TIMER
+ 	lmg	%r11,%r15,__PT_R11(%r11)
+-	lpswe	__LC_RETURN_PSW
++	b	__LC_RETURN_LPSWE(%r0)
+ .Lsysc_done:
+ 
+ #
+@@ -608,43 +611,50 @@ ENTRY(pgm_check_handler)
+ 	BPOFF
+ 	stmg	%r8,%r15,__LC_SAVE_AREA_SYNC
+ 	lg	%r10,__LC_LAST_BREAK
+-	lg	%r12,__LC_CURRENT
++	srag	%r11,%r10,12
++	jnz	0f
++	/* if __LC_LAST_BREAK is < 4096, it contains one of
++	 * the lpswe addresses in lowcore. Set it to 1 (initial state)
++	 * to prevent leaking that address to userspace.
++	 */
++	lghi	%r10,1
++0:	lg	%r12,__LC_CURRENT
+ 	lghi	%r11,0
+ 	larl	%r13,cleanup_critical
+ 	lmg	%r8,%r9,__LC_PGM_OLD_PSW
+ 	tmhh	%r8,0x0001		# test problem state bit
+-	jnz	2f			# -> fault in user space
++	jnz	3f			# -> fault in user space
+ #if IS_ENABLED(CONFIG_KVM)
+ 	# cleanup critical section for program checks in sie64a
+ 	lgr	%r14,%r9
+ 	slg	%r14,BASED(.Lsie_critical_start)
+ 	clg	%r14,BASED(.Lsie_critical_length)
+-	jhe	0f
++	jhe	1f
+ 	lg	%r14,__SF_SIE_CONTROL(%r15)	# get control block pointer
+ 	ni	__SIE_PROG0C+3(%r14),0xfe	# no longer in SIE
+ 	lctlg	%c1,%c1,__LC_USER_ASCE		# load primary asce
+ 	larl	%r9,sie_exit			# skip forward to sie_exit
+ 	lghi	%r11,_PIF_GUEST_FAULT
+ #endif
+-0:	tmhh	%r8,0x4000		# PER bit set in old PSW ?
+-	jnz	1f			# -> enabled, can't be a double fault
++1:	tmhh	%r8,0x4000		# PER bit set in old PSW ?
++	jnz	2f			# -> enabled, can't be a double fault
+ 	tm	__LC_PGM_ILC+3,0x80	# check for per exception
+ 	jnz	.Lpgm_svcper		# -> single stepped svc
+-1:	CHECK_STACK __LC_SAVE_AREA_SYNC
++2:	CHECK_STACK __LC_SAVE_AREA_SYNC
+ 	aghi	%r15,-(STACK_FRAME_OVERHEAD + __PT_SIZE)
+-	# CHECK_VMAP_STACK branches to stack_overflow or 4f
+-	CHECK_VMAP_STACK __LC_SAVE_AREA_SYNC,4f
+-2:	UPDATE_VTIME %r14,%r15,__LC_SYNC_ENTER_TIMER
++	# CHECK_VMAP_STACK branches to stack_overflow or 5f
++	CHECK_VMAP_STACK __LC_SAVE_AREA_SYNC,5f
++3:	UPDATE_VTIME %r14,%r15,__LC_SYNC_ENTER_TIMER
+ 	BPENTER __TI_flags(%r12),_TIF_ISOLATE_BP
+ 	lg	%r15,__LC_KERNEL_STACK
+ 	lgr	%r14,%r12
+ 	aghi	%r14,__TASK_thread	# pointer to thread_struct
+ 	lghi	%r13,__LC_PGM_TDB
+ 	tm	__LC_PGM_ILC+2,0x02	# check for transaction abort
+-	jz	3f
++	jz	4f
+ 	mvc	__THREAD_trap_tdb(256,%r14),0(%r13)
+-3:	stg	%r10,__THREAD_last_break(%r14)
+-4:	lgr	%r13,%r11
++4:	stg	%r10,__THREAD_last_break(%r14)
++5:	lgr	%r13,%r11
+ 	la	%r11,STACK_FRAME_OVERHEAD(%r15)
+ 	stmg	%r0,%r7,__PT_R0(%r11)
+ 	# clear user controlled registers to prevent speculative use
+@@ -663,14 +673,14 @@ ENTRY(pgm_check_handler)
+ 	stg	%r13,__PT_FLAGS(%r11)
+ 	stg	%r10,__PT_ARGS(%r11)
+ 	tm	__LC_PGM_ILC+3,0x80	# check for per exception
+-	jz	5f
++	jz	6f
+ 	tmhh	%r8,0x0001		# kernel per event ?
+ 	jz	.Lpgm_kprobe
+ 	oi	__PT_FLAGS+7(%r11),_PIF_PER_TRAP
+ 	mvc	__THREAD_per_address(8,%r14),__LC_PER_ADDRESS
+ 	mvc	__THREAD_per_cause(2,%r14),__LC_PER_CODE
+ 	mvc	__THREAD_per_paid(1,%r14),__LC_PER_ACCESS_ID
+-5:	REENABLE_IRQS
++6:	REENABLE_IRQS
+ 	xc	__SF_BACKCHAIN(8,%r15),__SF_BACKCHAIN(%r15)
+ 	larl	%r1,pgm_check_table
+ 	llgh	%r10,__PT_INT_CODE+2(%r11)
+@@ -775,7 +785,7 @@ ENTRY(io_int_handler)
+ 	mvc	__VDSO_ECTG_BASE(16,%r14),__LC_EXIT_TIMER
+ .Lio_exit_kernel:
+ 	lmg	%r11,%r15,__PT_R11(%r11)
+-	lpswe	__LC_RETURN_PSW
++	b	__LC_RETURN_LPSWE(%r0)
+ .Lio_done:
+ 
+ #
+@@ -1214,7 +1224,7 @@ ENTRY(mcck_int_handler)
+ 	stpt	__LC_EXIT_TIMER
+ 	mvc	__VDSO_ECTG_BASE(16,%r14),__LC_EXIT_TIMER
+ 0:	lmg	%r11,%r15,__PT_R11(%r11)
+-	lpswe	__LC_RETURN_MCCK_PSW
++	b	__LC_RETURN_MCCK_LPSWE
+ 
+ .Lmcck_panic:
+ 	lg	%r15,__LC_NODAT_STACK
+@@ -1271,6 +1281,8 @@ ENDPROC(stack_overflow)
+ #endif
+ 
+ ENTRY(cleanup_critical)
++	cghi	%r9,__LC_RETURN_LPSWE
++	je	.Lcleanup_lpswe
+ #if IS_ENABLED(CONFIG_KVM)
+ 	clg	%r9,BASED(.Lcleanup_table_sie)	# .Lsie_gmap
+ 	jl	0f
+@@ -1424,6 +1436,7 @@ ENDPROC(cleanup_critical)
+ 	mvc	__LC_RETURN_PSW(16),__PT_PSW(%r9)
+ 	mvc	0(64,%r11),__PT_R8(%r9)
+ 	lmg	%r0,%r7,__PT_R0(%r9)
++.Lcleanup_lpswe:
+ 1:	lmg	%r8,%r9,__LC_RETURN_PSW
+ 	BR_EX	%r14,%r11
+ .Lcleanup_sysc_restore_insn:
+diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
+index 6ccef5f29761..eb6e23ad15a2 100644
+--- a/arch/s390/kernel/process.c
++++ b/arch/s390/kernel/process.c
+@@ -106,6 +106,7 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long new_stackp,
+ 	p->thread.system_timer = 0;
+ 	p->thread.hardirq_timer = 0;
+ 	p->thread.softirq_timer = 0;
++	p->thread.last_break = 1;
+ 
+ 	frame->sf.back_chain = 0;
+ 	/* new return point is ret_from_fork */
+diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
+index b2c2f75860e8..6f8efeaf220d 100644
+--- a/arch/s390/kernel/setup.c
++++ b/arch/s390/kernel/setup.c
+@@ -73,6 +73,7 @@
+ #include <asm/nospec-branch.h>
+ #include <asm/mem_detect.h>
+ #include <asm/uv.h>
++#include <asm/asm-offsets.h>
+ #include "entry.h"
+ 
+ /*
+@@ -450,6 +451,8 @@ static void __init setup_lowcore_dat_off(void)
+ 	lc->spinlock_index = 0;
+ 	arch_spin_lock_setup(0);
+ 	lc->br_r1_trampoline = 0x07f1;	/* br %r1 */
++	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
++	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
+ 
+ 	set_prefix((u32)(unsigned long) lc);
+ 	lowcore_ptr[0] = lc;
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index a08bd2522dd9..f87d4e14269c 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -212,6 +212,8 @@ static int pcpu_alloc_lowcore(struct pcpu *pcpu, int cpu)
+ 	lc->spinlock_lockval = arch_spin_lockval(cpu);
+ 	lc->spinlock_index = 0;
+ 	lc->br_r1_trampoline = 0x07f1;	/* br %r1 */
++	lc->return_lpswe = gen_lpswe(__LC_RETURN_PSW);
++	lc->return_mcck_lpswe = gen_lpswe(__LC_RETURN_MCCK_PSW);
+ 	if (nmi_alloc_per_cpu(lc))
+ 		goto out_async;
+ 	if (vdso_alloc_per_cpu(lc))
+diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
+index b403fa14847d..f810930aff42 100644
+--- a/arch/s390/mm/vmem.c
++++ b/arch/s390/mm/vmem.c
+@@ -415,6 +415,10 @@ void __init vmem_map_init(void)
+ 		     SET_MEMORY_RO | SET_MEMORY_X);
+ 	__set_memory(__stext_dma, (__etext_dma - __stext_dma) >> PAGE_SHIFT,
+ 		     SET_MEMORY_RO | SET_MEMORY_X);
++
++	/* we need lowcore executable for our LPSWE instructions */
++	set_memory_x(0, 1);
++
+ 	pr_info("Write protected kernel read-only data: %luk\n",
+ 		(unsigned long)(__end_rodata - _stext) >> 10);
+ }
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index d92088dec6c3..d4bd9b961726 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -3023,6 +3023,14 @@ static int blk_mq_alloc_rq_maps(struct blk_mq_tag_set *set)
+ 
+ static int blk_mq_update_queue_map(struct blk_mq_tag_set *set)
+ {
++	/*
++	 * blk_mq_map_queues() and multiple .map_queues() implementations
++	 * expect that set->map[HCTX_TYPE_DEFAULT].nr_queues is set to the
++	 * number of hardware queues.
++	 */
++	if (set->nr_maps == 1)
++		set->map[HCTX_TYPE_DEFAULT].nr_queues = set->nr_hw_queues;
++
+ 	if (set->ops->map_queues && !is_kdump_kernel()) {
+ 		int i;
+ 
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index e5f95922bc21..ce49cbfa941b 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -1012,6 +1012,10 @@ static bool acpi_s2idle_wake(void)
+ 		if (acpi_any_fixed_event_status_set())
+ 			return true;
+ 
++		/* Check wakeups from drivers sharing the SCI. */
++		if (acpi_check_wakeup_handlers())
++			return true;
++
+ 		/*
+ 		 * If there are no EC events to process and at least one of the
+ 		 * other enabled GPEs is active, the wakeup is regarded as a
+diff --git a/drivers/acpi/sleep.h b/drivers/acpi/sleep.h
+index 41675d24a9bc..3d90480ce1b1 100644
+--- a/drivers/acpi/sleep.h
++++ b/drivers/acpi/sleep.h
+@@ -2,6 +2,7 @@
+ 
+ extern void acpi_enable_wakeup_devices(u8 sleep_state);
+ extern void acpi_disable_wakeup_devices(u8 sleep_state);
++extern bool acpi_check_wakeup_handlers(void);
+ 
+ extern struct list_head acpi_wakeup_device_list;
+ extern struct mutex acpi_device_lock;
+diff --git a/drivers/acpi/wakeup.c b/drivers/acpi/wakeup.c
+index 9614126bf56e..90c40f992e13 100644
+--- a/drivers/acpi/wakeup.c
++++ b/drivers/acpi/wakeup.c
+@@ -12,6 +12,15 @@
+ #include "internal.h"
+ #include "sleep.h"
+ 
++struct acpi_wakeup_handler {
++	struct list_head list_node;
++	bool (*wakeup)(void *context);
++	void *context;
++};
++
++static LIST_HEAD(acpi_wakeup_handler_head);
++static DEFINE_MUTEX(acpi_wakeup_handler_mutex);
++
+ /*
+  * We didn't lock acpi_device_lock in the file, because it invokes oops in
+  * suspend/resume and isn't really required as this is called in S-state. At
+@@ -96,3 +105,75 @@ int __init acpi_wakeup_device_init(void)
+ 	mutex_unlock(&acpi_device_lock);
+ 	return 0;
+ }
++
++/**
++ * acpi_register_wakeup_handler - Register wakeup handler
++ * @wake_irq: The IRQ through which the device may receive wakeups
++ * @wakeup:   Wakeup-handler to call when the SCI has triggered a wakeup
++ * @context:  Context to pass to the handler when calling it
++ *
++ * Drivers which may share an IRQ with the SCI can use this to register
++ * a handler which returns true when the device they are managing wants
++ * to trigger a wakeup.
++ */
++int acpi_register_wakeup_handler(int wake_irq, bool (*wakeup)(void *context),
++				 void *context)
++{
++	struct acpi_wakeup_handler *handler;
++
++	/*
++	 * If the device is not sharing its IRQ with the SCI, there is no
++	 * need to register the handler.
++	 */
++	if (!acpi_sci_irq_valid() || wake_irq != acpi_sci_irq)
++		return 0;
++
++	handler = kmalloc(sizeof(*handler), GFP_KERNEL);
++	if (!handler)
++		return -ENOMEM;
++
++	handler->wakeup = wakeup;
++	handler->context = context;
++
++	mutex_lock(&acpi_wakeup_handler_mutex);
++	list_add(&handler->list_node, &acpi_wakeup_handler_head);
++	mutex_unlock(&acpi_wakeup_handler_mutex);
++
++	return 0;
++}
++EXPORT_SYMBOL_GPL(acpi_register_wakeup_handler);
++
++/**
++ * acpi_unregister_wakeup_handler - Unregister wakeup handler
++ * @wakeup:   Wakeup-handler passed to acpi_register_wakeup_handler()
++ * @context:  Context passed to acpi_register_wakeup_handler()
++ */
++void acpi_unregister_wakeup_handler(bool (*wakeup)(void *context),
++				    void *context)
++{
++	struct acpi_wakeup_handler *handler;
++
++	mutex_lock(&acpi_wakeup_handler_mutex);
++	list_for_each_entry(handler, &acpi_wakeup_handler_head, list_node) {
++		if (handler->wakeup == wakeup && handler->context == context) {
++			list_del(&handler->list_node);
++			kfree(handler);
++			break;
++		}
++	}
++	mutex_unlock(&acpi_wakeup_handler_mutex);
++}
++EXPORT_SYMBOL_GPL(acpi_unregister_wakeup_handler);
++
++bool acpi_check_wakeup_handlers(void)
++{
++	struct acpi_wakeup_handler *handler;
++
++	/* No need to lock, nothing else is running when we're called. */
++	list_for_each_entry(handler, &acpi_wakeup_handler_head, list_node) {
++		if (handler->wakeup(handler->context))
++			return true;
++	}
++
++	return false;
++}
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index dbb0f9130f42..d32a3aefff32 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -523,9 +523,13 @@ static void device_link_add_missing_supplier_links(void)
+ 
+ 	mutex_lock(&wfs_lock);
+ 	list_for_each_entry_safe(dev, tmp, &wait_for_suppliers,
+-				 links.needs_suppliers)
+-		if (!fwnode_call_int_op(dev->fwnode, add_links, dev))
++				 links.needs_suppliers) {
++		int ret = fwnode_call_int_op(dev->fwnode, add_links, dev);
++		if (!ret)
+ 			list_del_init(&dev->links.needs_suppliers);
++		else if (ret != -ENODEV)
++			dev->links.need_for_probe = false;
++	}
+ 	mutex_unlock(&wfs_lock);
+ }
+ 
+diff --git a/drivers/char/hw_random/imx-rngc.c b/drivers/char/hw_random/imx-rngc.c
+index 30cf00f8e9a0..0576801944fd 100644
+--- a/drivers/char/hw_random/imx-rngc.c
++++ b/drivers/char/hw_random/imx-rngc.c
+@@ -105,8 +105,10 @@ static int imx_rngc_self_test(struct imx_rngc *rngc)
+ 		return -ETIMEDOUT;
+ 	}
+ 
+-	if (rngc->err_reg != 0)
++	if (rngc->err_reg != 0) {
++		imx_rngc_irq_mask_clear(rngc);
+ 		return -EIO;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/char/random.c b/drivers/char/random.c
+index c7f9584de2c8..a6b77a850ddd 100644
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -2149,11 +2149,11 @@ struct batched_entropy {
+ 
+ /*
+  * Get a random word for internal kernel use only. The quality of the random
+- * number is either as good as RDRAND or as good as /dev/urandom, with the
+- * goal of being quite fast and not depleting entropy. In order to ensure
++ * number is good as /dev/urandom, but there is no backtrack protection, with
++ * the goal of being quite fast and not depleting entropy. In order to ensure
+  * that the randomness provided by this function is okay, the function
+- * wait_for_random_bytes() should be called and return 0 at least once
+- * at any point prior.
++ * wait_for_random_bytes() should be called and return 0 at least once at any
++ * point prior.
+  */
+ static DEFINE_PER_CPU(struct batched_entropy, batched_entropy_u64) = {
+ 	.batch_lock	= __SPIN_LOCK_UNLOCKED(batched_entropy_u64.lock),
+@@ -2166,15 +2166,6 @@ u64 get_random_u64(void)
+ 	struct batched_entropy *batch;
+ 	static void *previous;
+ 
+-#if BITS_PER_LONG == 64
+-	if (arch_get_random_long((unsigned long *)&ret))
+-		return ret;
+-#else
+-	if (arch_get_random_long((unsigned long *)&ret) &&
+-	    arch_get_random_long((unsigned long *)&ret + 1))
+-	    return ret;
+-#endif
+-
+ 	warn_unseeded_randomness(&previous);
+ 
+ 	batch = raw_cpu_ptr(&batched_entropy_u64);
+@@ -2199,9 +2190,6 @@ u32 get_random_u32(void)
+ 	struct batched_entropy *batch;
+ 	static void *previous;
+ 
+-	if (arch_get_random_int(&ret))
+-		return ret;
+-
+ 	warn_unseeded_randomness(&previous);
+ 
+ 	batch = raw_cpu_ptr(&batched_entropy_u32);
+diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
+index 2dec3a02ab9f..ff972cf30712 100644
+--- a/drivers/infiniband/core/cma.c
++++ b/drivers/infiniband/core/cma.c
+@@ -2968,6 +2968,7 @@ static int cma_resolve_iboe_route(struct rdma_id_private *id_priv)
+ err2:
+ 	kfree(route->path_rec);
+ 	route->path_rec = NULL;
++	route->num_paths = 0;
+ err1:
+ 	kfree(work);
+ 	return ret;
+@@ -4790,6 +4791,19 @@ static int __init cma_init(void)
+ {
+ 	int ret;
+ 
++	/*
++	 * There is a rare lock ordering dependency in cma_netdev_callback()
++	 * that only happens when bonding is enabled. Teach lockdep that rtnl
++	 * must never be nested under lock so it can find these without having
++	 * to test with bonding.
++	 */
++	if (IS_ENABLED(CONFIG_LOCKDEP)) {
++		rtnl_lock();
++		mutex_lock(&lock);
++		mutex_unlock(&lock);
++		rtnl_unlock();
++	}
++
+ 	cma_wq = alloc_ordered_workqueue("rdma_cm", WQ_MEM_RECLAIM);
+ 	if (!cma_wq)
+ 		return -ENOMEM;
+diff --git a/drivers/infiniband/core/ucma.c b/drivers/infiniband/core/ucma.c
+index 0274e9b704be..f4f79f1292b9 100644
+--- a/drivers/infiniband/core/ucma.c
++++ b/drivers/infiniband/core/ucma.c
+@@ -91,6 +91,7 @@ struct ucma_context {
+ 
+ 	struct ucma_file	*file;
+ 	struct rdma_cm_id	*cm_id;
++	struct mutex		mutex;
+ 	u64			uid;
+ 
+ 	struct list_head	list;
+@@ -216,6 +217,7 @@ static struct ucma_context *ucma_alloc_ctx(struct ucma_file *file)
+ 	init_completion(&ctx->comp);
+ 	INIT_LIST_HEAD(&ctx->mc_list);
+ 	ctx->file = file;
++	mutex_init(&ctx->mutex);
+ 
+ 	if (xa_alloc(&ctx_table, &ctx->id, ctx, xa_limit_32b, GFP_KERNEL))
+ 		goto error;
+@@ -589,6 +591,7 @@ static int ucma_free_ctx(struct ucma_context *ctx)
+ 	}
+ 
+ 	events_reported = ctx->events_reported;
++	mutex_destroy(&ctx->mutex);
+ 	kfree(ctx);
+ 	return events_reported;
+ }
+@@ -658,7 +661,10 @@ static ssize_t ucma_bind_ip(struct ucma_file *file, const char __user *inbuf,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_bind_addr(ctx->cm_id, (struct sockaddr *) &cmd.addr);
++	mutex_unlock(&ctx->mutex);
++
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -681,7 +687,9 @@ static ssize_t ucma_bind(struct ucma_file *file, const char __user *inbuf,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_bind_addr(ctx->cm_id, (struct sockaddr *) &cmd.addr);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -705,8 +713,10 @@ static ssize_t ucma_resolve_ip(struct ucma_file *file,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_resolve_addr(ctx->cm_id, (struct sockaddr *) &cmd.src_addr,
+ 				(struct sockaddr *) &cmd.dst_addr, cmd.timeout_ms);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -731,8 +741,10 @@ static ssize_t ucma_resolve_addr(struct ucma_file *file,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_resolve_addr(ctx->cm_id, (struct sockaddr *) &cmd.src_addr,
+ 				(struct sockaddr *) &cmd.dst_addr, cmd.timeout_ms);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -752,7 +764,9 @@ static ssize_t ucma_resolve_route(struct ucma_file *file,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_resolve_route(ctx->cm_id, cmd.timeout_ms);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -841,6 +855,7 @@ static ssize_t ucma_query_route(struct ucma_file *file,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	memset(&resp, 0, sizeof resp);
+ 	addr = (struct sockaddr *) &ctx->cm_id->route.addr.src_addr;
+ 	memcpy(&resp.src_addr, addr, addr->sa_family == AF_INET ?
+@@ -864,6 +879,7 @@ static ssize_t ucma_query_route(struct ucma_file *file,
+ 		ucma_copy_iw_route(&resp, &ctx->cm_id->route);
+ 
+ out:
++	mutex_unlock(&ctx->mutex);
+ 	if (copy_to_user(u64_to_user_ptr(cmd.response),
+ 			 &resp, sizeof(resp)))
+ 		ret = -EFAULT;
+@@ -1014,6 +1030,7 @@ static ssize_t ucma_query(struct ucma_file *file,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	switch (cmd.option) {
+ 	case RDMA_USER_CM_QUERY_ADDR:
+ 		ret = ucma_query_addr(ctx, response, out_len);
+@@ -1028,6 +1045,7 @@ static ssize_t ucma_query(struct ucma_file *file,
+ 		ret = -ENOSYS;
+ 		break;
+ 	}
++	mutex_unlock(&ctx->mutex);
+ 
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+@@ -1068,7 +1086,9 @@ static ssize_t ucma_connect(struct ucma_file *file, const char __user *inbuf,
+ 		return PTR_ERR(ctx);
+ 
+ 	ucma_copy_conn_param(ctx->cm_id, &conn_param, &cmd.conn_param);
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_connect(ctx->cm_id, &conn_param);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -1089,7 +1109,9 @@ static ssize_t ucma_listen(struct ucma_file *file, const char __user *inbuf,
+ 
+ 	ctx->backlog = cmd.backlog > 0 && cmd.backlog < max_backlog ?
+ 		       cmd.backlog : max_backlog;
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_listen(ctx->cm_id, ctx->backlog);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -1112,13 +1134,17 @@ static ssize_t ucma_accept(struct ucma_file *file, const char __user *inbuf,
+ 	if (cmd.conn_param.valid) {
+ 		ucma_copy_conn_param(ctx->cm_id, &conn_param, &cmd.conn_param);
+ 		mutex_lock(&file->mut);
++		mutex_lock(&ctx->mutex);
+ 		ret = __rdma_accept(ctx->cm_id, &conn_param, NULL);
++		mutex_unlock(&ctx->mutex);
+ 		if (!ret)
+ 			ctx->uid = cmd.uid;
+ 		mutex_unlock(&file->mut);
+-	} else
++	} else {
++		mutex_lock(&ctx->mutex);
+ 		ret = __rdma_accept(ctx->cm_id, NULL, NULL);
+-
++		mutex_unlock(&ctx->mutex);
++	}
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -1137,7 +1163,9 @@ static ssize_t ucma_reject(struct ucma_file *file, const char __user *inbuf,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_reject(ctx->cm_id, cmd.private_data, cmd.private_data_len);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -1156,7 +1184,9 @@ static ssize_t ucma_disconnect(struct ucma_file *file, const char __user *inbuf,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_disconnect(ctx->cm_id);
++	mutex_unlock(&ctx->mutex);
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+ }
+@@ -1187,7 +1217,9 @@ static ssize_t ucma_init_qp_attr(struct ucma_file *file,
+ 	resp.qp_attr_mask = 0;
+ 	memset(&qp_attr, 0, sizeof qp_attr);
+ 	qp_attr.qp_state = cmd.qp_state;
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_init_qp_attr(ctx->cm_id, &qp_attr, &resp.qp_attr_mask);
++	mutex_unlock(&ctx->mutex);
+ 	if (ret)
+ 		goto out;
+ 
+@@ -1273,9 +1305,13 @@ static int ucma_set_ib_path(struct ucma_context *ctx,
+ 		struct sa_path_rec opa;
+ 
+ 		sa_convert_path_ib_to_opa(&opa, &sa_path);
++		mutex_lock(&ctx->mutex);
+ 		ret = rdma_set_ib_path(ctx->cm_id, &opa);
++		mutex_unlock(&ctx->mutex);
+ 	} else {
++		mutex_lock(&ctx->mutex);
+ 		ret = rdma_set_ib_path(ctx->cm_id, &sa_path);
++		mutex_unlock(&ctx->mutex);
+ 	}
+ 	if (ret)
+ 		return ret;
+@@ -1308,7 +1344,9 @@ static int ucma_set_option_level(struct ucma_context *ctx, int level,
+ 
+ 	switch (level) {
+ 	case RDMA_OPTION_ID:
++		mutex_lock(&ctx->mutex);
+ 		ret = ucma_set_option_id(ctx, optname, optval, optlen);
++		mutex_unlock(&ctx->mutex);
+ 		break;
+ 	case RDMA_OPTION_IB:
+ 		ret = ucma_set_option_ib(ctx, optname, optval, optlen);
+@@ -1368,8 +1406,10 @@ static ssize_t ucma_notify(struct ucma_file *file, const char __user *inbuf,
+ 	if (IS_ERR(ctx))
+ 		return PTR_ERR(ctx);
+ 
++	mutex_lock(&ctx->mutex);
+ 	if (ctx->cm_id->device)
+ 		ret = rdma_notify(ctx->cm_id, (enum ib_event_type)cmd.event);
++	mutex_unlock(&ctx->mutex);
+ 
+ 	ucma_put_ctx(ctx);
+ 	return ret;
+@@ -1412,8 +1452,10 @@ static ssize_t ucma_process_join(struct ucma_file *file,
+ 	mc->join_state = join_state;
+ 	mc->uid = cmd->uid;
+ 	memcpy(&mc->addr, addr, cmd->addr_size);
++	mutex_lock(&ctx->mutex);
+ 	ret = rdma_join_multicast(ctx->cm_id, (struct sockaddr *)&mc->addr,
+ 				  join_state, mc);
++	mutex_unlock(&ctx->mutex);
+ 	if (ret)
+ 		goto err2;
+ 
+@@ -1513,7 +1555,10 @@ static ssize_t ucma_leave_multicast(struct ucma_file *file,
+ 		goto out;
+ 	}
+ 
++	mutex_lock(&mc->ctx->mutex);
+ 	rdma_leave_multicast(mc->ctx->cm_id, (struct sockaddr *) &mc->addr);
++	mutex_unlock(&mc->ctx->mutex);
++
+ 	mutex_lock(&mc->ctx->file->mut);
+ 	ucma_cleanup_mc_events(mc);
+ 	list_del(&mc->list);
+diff --git a/drivers/infiniband/hw/hfi1/sysfs.c b/drivers/infiniband/hw/hfi1/sysfs.c
+index 90f62c4bddba..074ec71772d2 100644
+--- a/drivers/infiniband/hw/hfi1/sysfs.c
++++ b/drivers/infiniband/hw/hfi1/sysfs.c
+@@ -674,7 +674,11 @@ int hfi1_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		dd_dev_err(dd,
+ 			   "Skipping sc2vl sysfs info, (err %d) port %u\n",
+ 			   ret, port_num);
+-		goto bail;
++		/*
++		 * Based on the documentation for kobject_init_and_add(), the
++		 * caller should call kobject_put even if this call fails.
++		 */
++		goto bail_sc2vl;
+ 	}
+ 	kobject_uevent(&ppd->sc2vl_kobj, KOBJ_ADD);
+ 
+@@ -684,7 +688,7 @@ int hfi1_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		dd_dev_err(dd,
+ 			   "Skipping sl2sc sysfs info, (err %d) port %u\n",
+ 			   ret, port_num);
+-		goto bail_sc2vl;
++		goto bail_sl2sc;
+ 	}
+ 	kobject_uevent(&ppd->sl2sc_kobj, KOBJ_ADD);
+ 
+@@ -694,7 +698,7 @@ int hfi1_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		dd_dev_err(dd,
+ 			   "Skipping vl2mtu sysfs info, (err %d) port %u\n",
+ 			   ret, port_num);
+-		goto bail_sl2sc;
++		goto bail_vl2mtu;
+ 	}
+ 	kobject_uevent(&ppd->vl2mtu_kobj, KOBJ_ADD);
+ 
+@@ -704,7 +708,7 @@ int hfi1_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		dd_dev_err(dd,
+ 			   "Skipping Congestion Control sysfs info, (err %d) port %u\n",
+ 			   ret, port_num);
+-		goto bail_vl2mtu;
++		goto bail_cc;
+ 	}
+ 
+ 	kobject_uevent(&ppd->pport_cc_kobj, KOBJ_ADD);
+@@ -742,7 +746,6 @@ bail_sl2sc:
+ 	kobject_put(&ppd->sl2sc_kobj);
+ bail_sc2vl:
+ 	kobject_put(&ppd->sc2vl_kobj);
+-bail:
+ 	return ret;
+ }
+ 
+@@ -853,8 +856,13 @@ int hfi1_verbs_register_sysfs(struct hfi1_devdata *dd)
+ 
+ 	return 0;
+ bail:
+-	for (i = 0; i < dd->num_sdma; i++)
+-		kobject_del(&dd->per_sdma[i].kobj);
++	/*
++	 * The function kobject_put() will call kobject_del() if the kobject
++	 * has been added successfully. The sysfs files created under the
++	 * kobject directory will also be removed during the process.
++	 */
++	for (; i >= 0; i--)
++		kobject_put(&dd->per_sdma[i].kobj);
+ 
+ 	return ret;
+ }
+@@ -867,6 +875,10 @@ void hfi1_verbs_unregister_sysfs(struct hfi1_devdata *dd)
+ 	struct hfi1_pportdata *ppd;
+ 	int i;
+ 
++	/* Unwind operations in hfi1_verbs_register_sysfs() */
++	for (i = 0; i < dd->num_sdma; i++)
++		kobject_put(&dd->per_sdma[i].kobj);
++
+ 	for (i = 0; i < dd->num_pports; i++) {
+ 		ppd = &dd->pport[i];
+ 
+diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
+index ffa7c2100edb..1279aeabf651 100644
+--- a/drivers/infiniband/hw/mlx5/main.c
++++ b/drivers/infiniband/hw/mlx5/main.c
+@@ -1192,12 +1192,10 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
+ 		if (MLX5_CAP_ETH(mdev, tunnel_stateless_gre))
+ 			resp.tunnel_offloads_caps |=
+ 				MLX5_IB_TUNNELED_OFFLOADS_GRE;
+-		if (MLX5_CAP_GEN(mdev, flex_parser_protocols) &
+-		    MLX5_FLEX_PROTO_CW_MPLS_GRE)
++		if (MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_gre))
+ 			resp.tunnel_offloads_caps |=
+ 				MLX5_IB_TUNNELED_OFFLOADS_MPLS_GRE;
+-		if (MLX5_CAP_GEN(mdev, flex_parser_protocols) &
+-		    MLX5_FLEX_PROTO_CW_MPLS_UDP)
++		if (MLX5_CAP_ETH(mdev, tunnel_stateless_mpls_over_udp))
+ 			resp.tunnel_offloads_caps |=
+ 				MLX5_IB_TUNNELED_OFFLOADS_MPLS_UDP;
+ 	}
+diff --git a/drivers/infiniband/sw/siw/siw_cm.c b/drivers/infiniband/sw/siw/siw_cm.c
+index c5651a96b196..559e5fd3bad8 100644
+--- a/drivers/infiniband/sw/siw/siw_cm.c
++++ b/drivers/infiniband/sw/siw/siw_cm.c
+@@ -1769,14 +1769,23 @@ int siw_reject(struct iw_cm_id *id, const void *pdata, u8 pd_len)
+ 	return 0;
+ }
+ 
+-static int siw_listen_address(struct iw_cm_id *id, int backlog,
+-			      struct sockaddr *laddr, int addr_family)
++/*
++ * siw_create_listen - Create resources for a listener's IWCM ID @id
++ *
++ * Starts listen on the socket address id->local_addr.
++ *
++ */
++int siw_create_listen(struct iw_cm_id *id, int backlog)
+ {
+ 	struct socket *s;
+ 	struct siw_cep *cep = NULL;
+ 	struct siw_device *sdev = to_siw_dev(id->device);
++	int addr_family = id->local_addr.ss_family;
+ 	int rv = 0, s_val;
+ 
++	if (addr_family != AF_INET && addr_family != AF_INET6)
++		return -EAFNOSUPPORT;
++
+ 	rv = sock_create(addr_family, SOCK_STREAM, IPPROTO_TCP, &s);
+ 	if (rv < 0)
+ 		return rv;
+@@ -1791,9 +1800,25 @@ static int siw_listen_address(struct iw_cm_id *id, int backlog,
+ 		siw_dbg(id->device, "setsockopt error: %d\n", rv);
+ 		goto error;
+ 	}
+-	rv = s->ops->bind(s, laddr, addr_family == AF_INET ?
+-				    sizeof(struct sockaddr_in) :
+-				    sizeof(struct sockaddr_in6));
++	if (addr_family == AF_INET) {
++		struct sockaddr_in *laddr = &to_sockaddr_in(id->local_addr);
++
++		/* For wildcard addr, limit binding to current device only */
++		if (ipv4_is_zeronet(laddr->sin_addr.s_addr))
++			s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
++
++		rv = s->ops->bind(s, (struct sockaddr *)laddr,
++				  sizeof(struct sockaddr_in));
++	} else {
++		struct sockaddr_in6 *laddr = &to_sockaddr_in6(id->local_addr);
++
++		/* For wildcard addr, limit binding to current device only */
++		if (ipv6_addr_any(&laddr->sin6_addr))
++			s->sk->sk_bound_dev_if = sdev->netdev->ifindex;
++
++		rv = s->ops->bind(s, (struct sockaddr *)laddr,
++				  sizeof(struct sockaddr_in6));
++	}
+ 	if (rv) {
+ 		siw_dbg(id->device, "socket bind error: %d\n", rv);
+ 		goto error;
+@@ -1852,7 +1877,7 @@ static int siw_listen_address(struct iw_cm_id *id, int backlog,
+ 	list_add_tail(&cep->listenq, (struct list_head *)id->provider_data);
+ 	cep->state = SIW_EPSTATE_LISTENING;
+ 
+-	siw_dbg(id->device, "Listen at laddr %pISp\n", laddr);
++	siw_dbg(id->device, "Listen at laddr %pISp\n", &id->local_addr);
+ 
+ 	return 0;
+ 
+@@ -1910,106 +1935,6 @@ static void siw_drop_listeners(struct iw_cm_id *id)
+ 	}
+ }
+ 
+-/*
+- * siw_create_listen - Create resources for a listener's IWCM ID @id
+- *
+- * Listens on the socket address id->local_addr.
+- *
+- * If the listener's @id provides a specific local IP address, at most one
+- * listening socket is created and associated with @id.
+- *
+- * If the listener's @id provides the wildcard (zero) local IP address,
+- * a separate listen is performed for each local IP address of the device
+- * by creating a listening socket and binding to that local IP address.
+- *
+- */
+-int siw_create_listen(struct iw_cm_id *id, int backlog)
+-{
+-	struct net_device *dev = to_siw_dev(id->device)->netdev;
+-	int rv = 0, listeners = 0;
+-
+-	siw_dbg(id->device, "backlog %d\n", backlog);
+-
+-	/*
+-	 * For each attached address of the interface, create a
+-	 * listening socket, if id->local_addr is the wildcard
+-	 * IP address or matches the IP address.
+-	 */
+-	if (id->local_addr.ss_family == AF_INET) {
+-		struct in_device *in_dev = in_dev_get(dev);
+-		struct sockaddr_in s_laddr;
+-		const struct in_ifaddr *ifa;
+-
+-		if (!in_dev) {
+-			rv = -ENODEV;
+-			goto out;
+-		}
+-		memcpy(&s_laddr, &id->local_addr, sizeof(s_laddr));
+-
+-		siw_dbg(id->device, "laddr %pISp\n", &s_laddr);
+-
+-		rtnl_lock();
+-		in_dev_for_each_ifa_rtnl(ifa, in_dev) {
+-			if (ipv4_is_zeronet(s_laddr.sin_addr.s_addr) ||
+-			    s_laddr.sin_addr.s_addr == ifa->ifa_address) {
+-				s_laddr.sin_addr.s_addr = ifa->ifa_address;
+-
+-				rv = siw_listen_address(id, backlog,
+-						(struct sockaddr *)&s_laddr,
+-						AF_INET);
+-				if (!rv)
+-					listeners++;
+-			}
+-		}
+-		rtnl_unlock();
+-		in_dev_put(in_dev);
+-	} else if (id->local_addr.ss_family == AF_INET6) {
+-		struct inet6_dev *in6_dev = in6_dev_get(dev);
+-		struct inet6_ifaddr *ifp;
+-		struct sockaddr_in6 *s_laddr = &to_sockaddr_in6(id->local_addr);
+-
+-		if (!in6_dev) {
+-			rv = -ENODEV;
+-			goto out;
+-		}
+-		siw_dbg(id->device, "laddr %pISp\n", &s_laddr);
+-
+-		rtnl_lock();
+-		list_for_each_entry(ifp, &in6_dev->addr_list, if_list) {
+-			if (ifp->flags & (IFA_F_TENTATIVE | IFA_F_DEPRECATED))
+-				continue;
+-			if (ipv6_addr_any(&s_laddr->sin6_addr) ||
+-			    ipv6_addr_equal(&s_laddr->sin6_addr, &ifp->addr)) {
+-				struct sockaddr_in6 bind_addr  = {
+-					.sin6_family = AF_INET6,
+-					.sin6_port = s_laddr->sin6_port,
+-					.sin6_flowinfo = 0,
+-					.sin6_addr = ifp->addr,
+-					.sin6_scope_id = dev->ifindex };
+-
+-				rv = siw_listen_address(id, backlog,
+-						(struct sockaddr *)&bind_addr,
+-						AF_INET6);
+-				if (!rv)
+-					listeners++;
+-			}
+-		}
+-		rtnl_unlock();
+-		in6_dev_put(in6_dev);
+-	} else {
+-		rv = -EAFNOSUPPORT;
+-	}
+-out:
+-	if (listeners)
+-		rv = 0;
+-	else if (!rv)
+-		rv = -EINVAL;
+-
+-	siw_dbg(id->device, "%s\n", rv ? "FAIL" : "OK");
+-
+-	return rv;
+-}
+-
+ int siw_destroy_listen(struct iw_cm_id *id)
+ {
+ 	if (!id->provider_data) {
+diff --git a/drivers/mtd/ubi/fastmap-wl.c b/drivers/mtd/ubi/fastmap-wl.c
+index 426820ab9afe..b486250923c5 100644
+--- a/drivers/mtd/ubi/fastmap-wl.c
++++ b/drivers/mtd/ubi/fastmap-wl.c
+@@ -39,6 +39,13 @@ static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root)
+ 	return victim;
+ }
+ 
++static inline void return_unused_peb(struct ubi_device *ubi,
++				     struct ubi_wl_entry *e)
++{
++	wl_tree_add(e, &ubi->free);
++	ubi->free_count++;
++}
++
+ /**
+  * return_unused_pool_pebs - returns unused PEB to the free tree.
+  * @ubi: UBI device description object
+@@ -52,8 +59,7 @@ static void return_unused_pool_pebs(struct ubi_device *ubi,
+ 
+ 	for (i = pool->used; i < pool->size; i++) {
+ 		e = ubi->lookuptbl[pool->pebs[i]];
+-		wl_tree_add(e, &ubi->free);
+-		ubi->free_count++;
++		return_unused_peb(ubi, e);
+ 	}
+ }
+ 
+@@ -361,6 +367,11 @@ static void ubi_fastmap_close(struct ubi_device *ubi)
+ 	return_unused_pool_pebs(ubi, &ubi->fm_pool);
+ 	return_unused_pool_pebs(ubi, &ubi->fm_wl_pool);
+ 
++	if (ubi->fm_anchor) {
++		return_unused_peb(ubi, ubi->fm_anchor);
++		ubi->fm_anchor = NULL;
++	}
++
+ 	if (ubi->fm) {
+ 		for (i = 0; i < ubi->fm->used_blocks; i++)
+ 			kfree(ubi->fm->e[i]);
+diff --git a/drivers/net/can/slcan.c b/drivers/net/can/slcan.c
+index a3664281a33f..4dfa459ef5c7 100644
+--- a/drivers/net/can/slcan.c
++++ b/drivers/net/can/slcan.c
+@@ -148,7 +148,7 @@ static void slc_bump(struct slcan *sl)
+ 	u32 tmpid;
+ 	char *cmd = sl->rbuff;
+ 
+-	cf.can_id = 0;
++	memset(&cf, 0, sizeof(cf));
+ 
+ 	switch (*cmd) {
+ 	case 'r':
+@@ -187,8 +187,6 @@ static void slc_bump(struct slcan *sl)
+ 	else
+ 		return;
+ 
+-	*(u64 *) (&cf.data) = 0; /* clear payload */
+-
+ 	/* RTR frames may have a dlc > 0 but they never have any data bytes */
+ 	if (!(cf.can_id & CAN_RTR_FLAG)) {
+ 		for (i = 0; i < cf.can_dlc; i++) {
+diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
+index b0f5280a83cb..e93c81c4062e 100644
+--- a/drivers/net/dsa/bcm_sf2.c
++++ b/drivers/net/dsa/bcm_sf2.c
+@@ -472,7 +472,7 @@ static int bcm_sf2_mdio_register(struct dsa_switch *ds)
+ 	priv->slave_mii_bus->parent = ds->dev->parent;
+ 	priv->slave_mii_bus->phy_mask = ~priv->indir_phy_mask;
+ 
+-	err = of_mdiobus_register(priv->slave_mii_bus, dn);
++	err = mdiobus_register(priv->slave_mii_bus);
+ 	if (err && dn)
+ 		of_node_put(dn);
+ 
+@@ -1069,6 +1069,7 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)
+ 	const struct bcm_sf2_of_data *data;
+ 	struct b53_platform_data *pdata;
+ 	struct dsa_switch_ops *ops;
++	struct device_node *ports;
+ 	struct bcm_sf2_priv *priv;
+ 	struct b53_device *dev;
+ 	struct dsa_switch *ds;
+@@ -1136,7 +1137,11 @@ static int bcm_sf2_sw_probe(struct platform_device *pdev)
+ 	set_bit(0, priv->cfp.used);
+ 	set_bit(0, priv->cfp.unique);
+ 
+-	bcm_sf2_identify_ports(priv, dn->child);
++	ports = of_find_node_by_name(dn, "ports");
++	if (ports) {
++		bcm_sf2_identify_ports(priv, ports);
++		of_node_put(ports);
++	}
+ 
+ 	priv->irq0 = irq_of_parse_and_map(dn, 0);
+ 	priv->irq1 = irq_of_parse_and_map(dn, 1);
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 7cbd1bd4c5a6..9b0de2852c69 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1356,6 +1356,9 @@ mt7530_setup(struct dsa_switch *ds)
+ 				continue;
+ 
+ 			phy_node = of_parse_phandle(mac_np, "phy-handle", 0);
++			if (!phy_node)
++				continue;
++
+ 			if (phy_node->parent == priv->dev->of_node->parent) {
+ 				ret = of_get_phy_mode(mac_np, &interface);
+ 				if (ret && ret != -ENODEV)
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 97f90edbc068..b0bdf7233f0c 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -3138,7 +3138,6 @@ static int cxgb_set_mac_addr(struct net_device *dev, void *p)
+ 		return ret;
+ 
+ 	memcpy(dev->dev_addr, addr->sa_data, dev->addr_len);
+-	pi->xact_addr_filt = ret;
+ 	return 0;
+ }
+ 
+@@ -6682,6 +6681,10 @@ static void shutdown_one(struct pci_dev *pdev)
+ 			if (adapter->port[i]->reg_state == NETREG_REGISTERED)
+ 				cxgb_close(adapter->port[i]);
+ 
++		rtnl_lock();
++		cxgb4_mqprio_stop_offload(adapter);
++		rtnl_unlock();
++
+ 		if (is_uld(adapter)) {
+ 			detach_ulds(adapter);
+ 			t4_uld_clean_up(adapter);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
+index ec3eb45ee3b4..e6af4906d674 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.c
+@@ -301,6 +301,7 @@ static void cxgb4_mqprio_free_hw_resources(struct net_device *dev)
+ 			cxgb4_clear_msix_aff(eorxq->msix->vec,
+ 					     eorxq->msix->aff_mask);
+ 			free_irq(eorxq->msix->vec, &eorxq->rspq);
++			cxgb4_free_msix_idx_in_bmap(adap, eorxq->msix->idx);
+ 		}
+ 
+ 		free_rspq_fl(adap, &eorxq->rspq, &eorxq->fl);
+@@ -611,6 +612,28 @@ out:
+ 	return ret;
+ }
+ 
++void cxgb4_mqprio_stop_offload(struct adapter *adap)
++{
++	struct cxgb4_tc_port_mqprio *tc_port_mqprio;
++	struct net_device *dev;
++	u8 i;
++
++	if (!adap->tc_mqprio || !adap->tc_mqprio->port_mqprio)
++		return;
++
++	for_each_port(adap, i) {
++		dev = adap->port[i];
++		if (!dev)
++			continue;
++
++		tc_port_mqprio = &adap->tc_mqprio->port_mqprio[i];
++		if (!tc_port_mqprio->mqprio.qopt.num_tc)
++			continue;
++
++		cxgb4_mqprio_disable_offload(dev);
++	}
++}
++
+ int cxgb4_init_tc_mqprio(struct adapter *adap)
+ {
+ 	struct cxgb4_tc_port_mqprio *tc_port_mqprio, *port_mqprio;
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h
+index c532f1ef8451..ff8794132b22 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_tc_mqprio.h
+@@ -38,6 +38,7 @@ struct cxgb4_tc_mqprio {
+ 
+ int cxgb4_setup_tc_mqprio(struct net_device *dev,
+ 			  struct tc_mqprio_qopt_offload *mqprio);
++void cxgb4_mqprio_stop_offload(struct adapter *adap);
+ int cxgb4_init_tc_mqprio(struct adapter *adap);
+ void cxgb4_cleanup_tc_mqprio(struct adapter *adap);
+ #endif /* __CXGB4_TC_MQPRIO_H__ */
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+index b607919c8ad0..498de6ef6870 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_flower.c
+@@ -123,9 +123,12 @@ static int mlxsw_sp_flower_parse_actions(struct mlxsw_sp *mlxsw_sp,
+ 			u8 prio = act->vlan.prio;
+ 			u16 vid = act->vlan.vid;
+ 
+-			return mlxsw_sp_acl_rulei_act_vlan(mlxsw_sp, rulei,
+-							   act->id, vid,
+-							   proto, prio, extack);
++			err = mlxsw_sp_acl_rulei_act_vlan(mlxsw_sp, rulei,
++							  act->id, vid,
++							  proto, prio, extack);
++			if (err)
++				return err;
++			break;
+ 			}
+ 		default:
+ 			NL_SET_ERR_MSG_MOD(extack, "Unsupported action");
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 791d99b9e1cf..6b633e9d76da 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -5549,12 +5549,10 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 
+ 	netif_napi_add(dev, &tp->napi, rtl8169_poll, NAPI_POLL_WEIGHT);
+ 
+-	dev->features |= NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
+-		NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_TX |
+-		NETIF_F_HW_VLAN_CTAG_RX;
+-	dev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
+-		NETIF_F_RXCSUM | NETIF_F_HW_VLAN_CTAG_TX |
+-		NETIF_F_HW_VLAN_CTAG_RX;
++	dev->features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
++			 NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX;
++	dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
++			   NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX;
+ 	dev->vlan_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO |
+ 		NETIF_F_HIGHDMA;
+ 	dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+@@ -5572,25 +5570,25 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 		dev->hw_features &= ~NETIF_F_HW_VLAN_CTAG_RX;
+ 
+ 	if (rtl_chip_supports_csum_v2(tp)) {
+-		dev->hw_features |= NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
+-		dev->features |= NETIF_F_IPV6_CSUM | NETIF_F_TSO6;
++		dev->hw_features |= NETIF_F_IPV6_CSUM;
++		dev->features |= NETIF_F_IPV6_CSUM;
++	}
++
++	/* There has been a number of reports that using SG/TSO results in
++	 * tx timeouts. However for a lot of people SG/TSO works fine.
++	 * Therefore disable both features by default, but allow users to
++	 * enable them. Use at own risk!
++	 */
++	if (rtl_chip_supports_csum_v2(tp)) {
++		dev->hw_features |= NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO6;
+ 		dev->gso_max_size = RTL_GSO_MAX_SIZE_V2;
+ 		dev->gso_max_segs = RTL_GSO_MAX_SEGS_V2;
+ 	} else {
++		dev->hw_features |= NETIF_F_SG | NETIF_F_TSO;
+ 		dev->gso_max_size = RTL_GSO_MAX_SIZE_V1;
+ 		dev->gso_max_segs = RTL_GSO_MAX_SEGS_V1;
+ 	}
+ 
+-	/* RTL8168e-vl and one RTL8168c variant are known to have a
+-	 * HW issue with TSO.
+-	 */
+-	if (tp->mac_version == RTL_GIGA_MAC_VER_34 ||
+-	    tp->mac_version == RTL_GIGA_MAC_VER_22) {
+-		dev->vlan_features &= ~(NETIF_F_ALL_TSO | NETIF_F_SG);
+-		dev->hw_features &= ~(NETIF_F_ALL_TSO | NETIF_F_SG);
+-		dev->features &= ~(NETIF_F_ALL_TSO | NETIF_F_SG);
+-	}
+-
+ 	dev->hw_features |= NETIF_F_RXALL;
+ 	dev->hw_features |= NETIF_F_RXFCS;
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+index 542784300620..efc6ec1b8027 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+@@ -207,7 +207,7 @@ static void dwmac1000_set_filter(struct mac_device_info *hw,
+ 			reg++;
+ 		}
+ 
+-		while (reg <= perfect_addr_number) {
++		while (reg < perfect_addr_number) {
+ 			writel(0, ioaddr + GMAC_ADDR_HIGH(reg));
+ 			writel(0, ioaddr + GMAC_ADDR_LOW(reg));
+ 			reg++;
+diff --git a/drivers/net/phy/at803x.c b/drivers/net/phy/at803x.c
+index 481cf48c9b9e..31f731e6df72 100644
+--- a/drivers/net/phy/at803x.c
++++ b/drivers/net/phy/at803x.c
+@@ -425,8 +425,8 @@ static int at803x_parse_dt(struct phy_device *phydev)
+ 		 */
+ 		if (at803x_match_phy_id(phydev, ATH8030_PHY_ID) ||
+ 		    at803x_match_phy_id(phydev, ATH8035_PHY_ID)) {
+-			priv->clk_25m_reg &= ~AT8035_CLK_OUT_MASK;
+-			priv->clk_25m_mask &= ~AT8035_CLK_OUT_MASK;
++			priv->clk_25m_reg &= AT8035_CLK_OUT_MASK;
++			priv->clk_25m_mask &= AT8035_CLK_OUT_MASK;
+ 		}
+ 	}
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 63dedec0433d..51b64f087717 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -25,6 +25,7 @@
+ #include <linux/micrel_phy.h>
+ #include <linux/of.h>
+ #include <linux/clk.h>
++#include <linux/delay.h>
+ 
+ /* Operation Mode Strap Override */
+ #define MII_KSZPHY_OMSO				0x16
+@@ -902,6 +903,12 @@ static int kszphy_resume(struct phy_device *phydev)
+ 
+ 	genphy_resume(phydev);
+ 
++	/* After switching from power-down to normal mode, an internal global
++	 * reset is automatically generated. Wait a minimum of 1 ms before
++	 * read/write access to the PHY registers.
++	 */
++	usleep_range(1000, 2000);
++
+ 	ret = kszphy_config_reset(phydev);
+ 	if (ret)
+ 		return ret;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 650c937ed56b..9de9b7d8aedd 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1715,8 +1715,12 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ 			alloc_frag->offset += buflen;
+ 		}
+ 		err = tun_xdp_act(tun, xdp_prog, &xdp, act);
+-		if (err < 0)
+-			goto err_xdp;
++		if (err < 0) {
++			if (act == XDP_REDIRECT || act == XDP_TX)
++				put_page(alloc_frag->page);
++			goto out;
++		}
++
+ 		if (err == XDP_REDIRECT)
+ 			xdp_do_flush();
+ 		if (err != XDP_PASS)
+@@ -1730,8 +1734,6 @@ static struct sk_buff *tun_build_skb(struct tun_struct *tun,
+ 
+ 	return __tun_build_skb(tfile, alloc_frag, buf, buflen, len, pad);
+ 
+-err_xdp:
+-	put_page(alloc_frag->page);
+ out:
+ 	rcu_read_unlock();
+ 	local_bh_enable();
+diff --git a/drivers/platform/x86/intel_int0002_vgpio.c b/drivers/platform/x86/intel_int0002_vgpio.c
+index f14e2c5f9da5..55f088f535e2 100644
+--- a/drivers/platform/x86/intel_int0002_vgpio.c
++++ b/drivers/platform/x86/intel_int0002_vgpio.c
+@@ -127,6 +127,14 @@ static irqreturn_t int0002_irq(int irq, void *data)
+ 	return IRQ_HANDLED;
+ }
+ 
++static bool int0002_check_wake(void *data)
++{
++	u32 gpe_sts_reg;
++
++	gpe_sts_reg = inl(GPE0A_STS_PORT);
++	return (gpe_sts_reg & GPE0A_PME_B0_STS_BIT);
++}
++
+ static struct irq_chip int0002_byt_irqchip = {
+ 	.name			= DRV_NAME,
+ 	.irq_ack		= int0002_irq_ack,
+@@ -220,6 +228,7 @@ static int int0002_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	acpi_register_wakeup_handler(irq, int0002_check_wake, NULL);
+ 	device_init_wakeup(dev, true);
+ 	return 0;
+ }
+@@ -227,6 +236,7 @@ static int int0002_probe(struct platform_device *pdev)
+ static int int0002_remove(struct platform_device *pdev)
+ {
+ 	device_init_wakeup(&pdev->dev, false);
++	acpi_unregister_wakeup_handler(int0002_check_wake, NULL);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 1e00bf2d65a2..a83aeccafae3 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1521,7 +1521,7 @@ static void dwc3_gadget_ep_skip_trbs(struct dwc3_ep *dep, struct dwc3_request *r
+ 	for (i = 0; i < req->num_trbs; i++) {
+ 		struct dwc3_trb *trb;
+ 
+-		trb = req->trb + i;
++		trb = &dep->trb_pool[dep->trb_dequeue];
+ 		trb->ctrl &= ~DWC3_TRB_CTRL_HWO;
+ 		dwc3_ep_inc_deq(dep);
+ 	}
+diff --git a/drivers/video/fbdev/core/fbcon.c b/drivers/video/fbdev/core/fbcon.c
+index bb6ae995c2e5..5eb3fc90f9f6 100644
+--- a/drivers/video/fbdev/core/fbcon.c
++++ b/drivers/video/fbdev/core/fbcon.c
+@@ -1283,6 +1283,9 @@ finished:
+ 	if (!con_is_bound(&fb_con))
+ 		fbcon_exit();
+ 
++	if (vc->vc_num == logo_shown)
++		logo_shown = FBCON_LOGO_CANSHOW;
++
+ 	return;
+ }
+ 
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 3affd96a98ba..bdcffd78fbb9 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -5607,7 +5607,7 @@ static void io_ring_file_put(struct io_ring_ctx *ctx, struct file *file)
+ struct io_file_put {
+ 	struct llist_node llist;
+ 	struct file *file;
+-	struct completion *done;
++	bool free_pfile;
+ };
+ 
+ static void io_ring_file_ref_flush(struct fixed_file_data *data)
+@@ -5618,9 +5618,7 @@ static void io_ring_file_ref_flush(struct fixed_file_data *data)
+ 	while ((node = llist_del_all(&data->put_llist)) != NULL) {
+ 		llist_for_each_entry_safe(pfile, tmp, node, llist) {
+ 			io_ring_file_put(data->ctx, pfile->file);
+-			if (pfile->done)
+-				complete(pfile->done);
+-			else
++			if (pfile->free_pfile)
+ 				kfree(pfile);
+ 		}
+ 	}
+@@ -5820,7 +5818,6 @@ static bool io_queue_file_removal(struct fixed_file_data *data,
+ 				  struct file *file)
+ {
+ 	struct io_file_put *pfile, pfile_stack;
+-	DECLARE_COMPLETION_ONSTACK(done);
+ 
+ 	/*
+ 	 * If we fail allocating the struct we need for doing async reomval
+@@ -5829,15 +5826,15 @@ static bool io_queue_file_removal(struct fixed_file_data *data,
+ 	pfile = kzalloc(sizeof(*pfile), GFP_KERNEL);
+ 	if (!pfile) {
+ 		pfile = &pfile_stack;
+-		pfile->done = &done;
+-	}
++		pfile->free_pfile = false;
++	} else
++		pfile->free_pfile = true;
+ 
+ 	pfile->file = file;
+ 	llist_add(&pfile->llist, &data->put_llist);
+ 
+ 	if (pfile == &pfile_stack) {
+ 		percpu_ref_switch_to_atomic(&data->refs, io_atomic_switch);
+-		wait_for_completion(&done);
+ 		flush_work(&data->ref_work);
+ 		return false;
+ 	}
+diff --git a/include/linux/acpi.h b/include/linux/acpi.h
+index 0f24d701fbdc..efac0f9c01a2 100644
+--- a/include/linux/acpi.h
++++ b/include/linux/acpi.h
+@@ -488,6 +488,11 @@ void __init acpi_nvs_nosave_s3(void);
+ void __init acpi_sleep_no_blacklist(void);
+ #endif /* CONFIG_PM_SLEEP */
+ 
++int acpi_register_wakeup_handler(
++	int wake_irq, bool (*wakeup)(void *context), void *context);
++void acpi_unregister_wakeup_handler(
++	bool (*wakeup)(void *context), void *context);
++
+ struct acpi_osc_context {
+ 	char *uuid_str;			/* UUID string */
+ 	int rev;
+diff --git a/include/linux/mlx5/mlx5_ifc.h b/include/linux/mlx5/mlx5_ifc.h
+index bfdf41537cf1..e5a3e26cad01 100644
+--- a/include/linux/mlx5/mlx5_ifc.h
++++ b/include/linux/mlx5/mlx5_ifc.h
+@@ -875,7 +875,11 @@ struct mlx5_ifc_per_protocol_networking_offload_caps_bits {
+ 	u8         swp_csum[0x1];
+ 	u8         swp_lso[0x1];
+ 	u8         cqe_checksum_full[0x1];
+-	u8         reserved_at_24[0x5];
++	u8         tunnel_stateless_geneve_tx[0x1];
++	u8         tunnel_stateless_mpls_over_udp[0x1];
++	u8         tunnel_stateless_mpls_over_gre[0x1];
++	u8         tunnel_stateless_vxlan_gpe[0x1];
++	u8         tunnel_stateless_ipv4_over_vxlan[0x1];
+ 	u8         tunnel_stateless_ip_over_ip[0x1];
+ 	u8         reserved_at_2a[0x6];
+ 	u8         max_vxlan_udp_ports[0x8];
+diff --git a/mm/slub.c b/mm/slub.c
+index 6589b41d5a60..3b17e774831a 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -259,7 +259,7 @@ static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
+ 	 * freepointer to be restored incorrectly.
+ 	 */
+ 	return (void *)((unsigned long)ptr ^ s->random ^
+-			(unsigned long)kasan_reset_tag((void *)ptr_addr));
++			swab((unsigned long)kasan_reset_tag((void *)ptr_addr)));
+ #else
+ 	return ptr;
+ #endif
+diff --git a/net/bluetooth/rfcomm/tty.c b/net/bluetooth/rfcomm/tty.c
+index 0c7d31c6c18c..a58584949a95 100644
+--- a/net/bluetooth/rfcomm/tty.c
++++ b/net/bluetooth/rfcomm/tty.c
+@@ -413,10 +413,8 @@ static int __rfcomm_create_dev(struct sock *sk, void __user *arg)
+ 		dlc = rfcomm_dlc_exists(&req.src, &req.dst, req.channel);
+ 		if (IS_ERR(dlc))
+ 			return PTR_ERR(dlc);
+-		else if (dlc) {
+-			rfcomm_dlc_put(dlc);
++		if (dlc)
+ 			return -EBUSY;
+-		}
+ 		dlc = rfcomm_dlc_alloc(GFP_KERNEL);
+ 		if (!dlc)
+ 			return -ENOMEM;
+diff --git a/net/ipv6/addrconf.c b/net/ipv6/addrconf.c
+index 46d614b611db..2a8175de8578 100644
+--- a/net/ipv6/addrconf.c
++++ b/net/ipv6/addrconf.c
+@@ -3296,6 +3296,10 @@ static void addrconf_addr_gen(struct inet6_dev *idev, bool prefix_route)
+ 	if (netif_is_l3_master(idev->dev))
+ 		return;
+ 
++	/* no link local addresses on devices flagged as slaves */
++	if (idev->dev->flags & IFF_SLAVE)
++		return;
++
+ 	ipv6_addr_set(&addr, htonl(0xFE800000), 0, 0, 0);
+ 
+ 	switch (idev->cnf.addr_gen_mode) {
+diff --git a/net/sched/cls_tcindex.c b/net/sched/cls_tcindex.c
+index 9904299424a1..61e95029c18f 100644
+--- a/net/sched/cls_tcindex.c
++++ b/net/sched/cls_tcindex.c
+@@ -11,6 +11,7 @@
+ #include <linux/skbuff.h>
+ #include <linux/errno.h>
+ #include <linux/slab.h>
++#include <linux/refcount.h>
+ #include <net/act_api.h>
+ #include <net/netlink.h>
+ #include <net/pkt_cls.h>
+@@ -26,9 +27,12 @@
+ #define DEFAULT_HASH_SIZE	64	/* optimized for diffserv */
+ 
+ 
++struct tcindex_data;
++
+ struct tcindex_filter_result {
+ 	struct tcf_exts		exts;
+ 	struct tcf_result	res;
++	struct tcindex_data	*p;
+ 	struct rcu_work		rwork;
+ };
+ 
+@@ -49,6 +53,7 @@ struct tcindex_data {
+ 	u32 hash;		/* hash table size; 0 if undefined */
+ 	u32 alloc_hash;		/* allocated size */
+ 	u32 fall_through;	/* 0: only classify if explicit match */
++	refcount_t refcnt;	/* a temporary refcnt for perfect hash */
+ 	struct rcu_work rwork;
+ };
+ 
+@@ -57,6 +62,20 @@ static inline int tcindex_filter_is_set(struct tcindex_filter_result *r)
+ 	return tcf_exts_has_actions(&r->exts) || r->res.classid;
+ }
+ 
++static void tcindex_data_get(struct tcindex_data *p)
++{
++	refcount_inc(&p->refcnt);
++}
++
++static void tcindex_data_put(struct tcindex_data *p)
++{
++	if (refcount_dec_and_test(&p->refcnt)) {
++		kfree(p->perfect);
++		kfree(p->h);
++		kfree(p);
++	}
++}
++
+ static struct tcindex_filter_result *tcindex_lookup(struct tcindex_data *p,
+ 						    u16 key)
+ {
+@@ -132,6 +151,7 @@ static int tcindex_init(struct tcf_proto *tp)
+ 	p->mask = 0xffff;
+ 	p->hash = DEFAULT_HASH_SIZE;
+ 	p->fall_through = 1;
++	refcount_set(&p->refcnt, 1); /* Paired with tcindex_destroy_work() */
+ 
+ 	rcu_assign_pointer(tp->root, p);
+ 	return 0;
+@@ -141,6 +161,7 @@ static void __tcindex_destroy_rexts(struct tcindex_filter_result *r)
+ {
+ 	tcf_exts_destroy(&r->exts);
+ 	tcf_exts_put_net(&r->exts);
++	tcindex_data_put(r->p);
+ }
+ 
+ static void tcindex_destroy_rexts_work(struct work_struct *work)
+@@ -212,6 +233,8 @@ found:
+ 		else
+ 			__tcindex_destroy_fexts(f);
+ 	} else {
++		tcindex_data_get(p);
++
+ 		if (tcf_exts_get_net(&r->exts))
+ 			tcf_queue_work(&r->rwork, tcindex_destroy_rexts_work);
+ 		else
+@@ -228,9 +251,7 @@ static void tcindex_destroy_work(struct work_struct *work)
+ 					      struct tcindex_data,
+ 					      rwork);
+ 
+-	kfree(p->perfect);
+-	kfree(p->h);
+-	kfree(p);
++	tcindex_data_put(p);
+ }
+ 
+ static inline int
+@@ -248,9 +269,11 @@ static const struct nla_policy tcindex_policy[TCA_TCINDEX_MAX + 1] = {
+ };
+ 
+ static int tcindex_filter_result_init(struct tcindex_filter_result *r,
++				      struct tcindex_data *p,
+ 				      struct net *net)
+ {
+ 	memset(r, 0, sizeof(*r));
++	r->p = p;
+ 	return tcf_exts_init(&r->exts, net, TCA_TCINDEX_ACT,
+ 			     TCA_TCINDEX_POLICE);
+ }
+@@ -290,6 +313,7 @@ static int tcindex_alloc_perfect_hash(struct net *net, struct tcindex_data *cp)
+ 				    TCA_TCINDEX_ACT, TCA_TCINDEX_POLICE);
+ 		if (err < 0)
+ 			goto errout;
++		cp->perfect[i].p = cp;
+ 	}
+ 
+ 	return 0;
+@@ -334,6 +358,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 	cp->alloc_hash = p->alloc_hash;
+ 	cp->fall_through = p->fall_through;
+ 	cp->tp = tp;
++	refcount_set(&cp->refcnt, 1); /* Paired with tcindex_destroy_work() */
+ 
+ 	if (tb[TCA_TCINDEX_HASH])
+ 		cp->hash = nla_get_u32(tb[TCA_TCINDEX_HASH]);
+@@ -366,7 +391,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 	}
+ 	cp->h = p->h;
+ 
+-	err = tcindex_filter_result_init(&new_filter_result, net);
++	err = tcindex_filter_result_init(&new_filter_result, cp, net);
+ 	if (err < 0)
+ 		goto errout_alloc;
+ 	if (old_r)
+@@ -434,7 +459,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 			goto errout_alloc;
+ 		f->key = handle;
+ 		f->next = NULL;
+-		err = tcindex_filter_result_init(&f->result, net);
++		err = tcindex_filter_result_init(&f->result, cp, net);
+ 		if (err < 0) {
+ 			kfree(f);
+ 			goto errout_alloc;
+@@ -447,7 +472,7 @@ tcindex_set_parms(struct net *net, struct tcf_proto *tp, unsigned long base,
+ 	}
+ 
+ 	if (old_r && old_r != r) {
+-		err = tcindex_filter_result_init(old_r, net);
++		err = tcindex_filter_result_init(old_r, cp, net);
+ 		if (err < 0) {
+ 			kfree(f);
+ 			goto errout_alloc;
+@@ -571,6 +596,14 @@ static void tcindex_destroy(struct tcf_proto *tp, bool rtnl_held,
+ 		for (i = 0; i < p->hash; i++) {
+ 			struct tcindex_filter_result *r = p->perfect + i;
+ 
++			/* tcf_queue_work() does not guarantee the ordering we
++			 * want, so we have to take this refcnt temporarily to
++			 * ensure 'p' is freed after all tcindex_filter_result
++			 * here. Imperfect hash does not need this, because it
++			 * uses linked lists rather than an array.
++			 */
++			tcindex_data_get(p);
++
+ 			tcf_unbind_filter(tp, &r->res);
+ 			if (tcf_exts_get_net(&r->exts))
+ 				tcf_queue_work(&r->rwork,
+diff --git a/sound/soc/codecs/tas2562.c b/sound/soc/codecs/tas2562.c
+index be52886a5edb..fb2233ca9103 100644
+--- a/sound/soc/codecs/tas2562.c
++++ b/sound/soc/codecs/tas2562.c
+@@ -409,7 +409,7 @@ static const struct snd_kcontrol_new vsense_switch =
+ 			1, 1);
+ 
+ static const struct snd_kcontrol_new tas2562_snd_controls[] = {
+-	SOC_SINGLE_TLV("Amp Gain Volume", TAS2562_PB_CFG1, 0, 0x1c, 0,
++	SOC_SINGLE_TLV("Amp Gain Volume", TAS2562_PB_CFG1, 1, 0x1c, 0,
+ 		       tas2562_dac_tlv),
+ };
+ 
+diff --git a/sound/soc/jz4740/jz4740-i2s.c b/sound/soc/jz4740/jz4740-i2s.c
+index 9d5405881209..434737b2b2b2 100644
+--- a/sound/soc/jz4740/jz4740-i2s.c
++++ b/sound/soc/jz4740/jz4740-i2s.c
+@@ -83,7 +83,7 @@
+ #define JZ_AIC_I2S_STATUS_BUSY BIT(2)
+ 
+ #define JZ_AIC_CLK_DIV_MASK 0xf
+-#define I2SDIV_DV_SHIFT 8
++#define I2SDIV_DV_SHIFT 0
+ #define I2SDIV_DV_MASK (0xf << I2SDIV_DV_SHIFT)
+ #define I2SDIV_IDV_SHIFT 8
+ #define I2SDIV_IDV_MASK (0xf << I2SDIV_IDV_SHIFT)
+diff --git a/tools/accounting/getdelays.c b/tools/accounting/getdelays.c
+index 8cb504d30384..5ef1c15e88ad 100644
+--- a/tools/accounting/getdelays.c
++++ b/tools/accounting/getdelays.c
+@@ -136,7 +136,7 @@ static int send_cmd(int sd, __u16 nlmsg_type, __u32 nlmsg_pid,
+ 	msg.g.version = 0x1;
+ 	na = (struct nlattr *) GENLMSG_DATA(&msg);
+ 	na->nla_type = nla_type;
+-	na->nla_len = nla_len + 1 + NLA_HDRLEN;
++	na->nla_len = nla_len + NLA_HDRLEN;
+ 	memcpy(NLA_DATA(na), nla_data, nla_len);
+ 	msg.n.nlmsg_len += NLMSG_ALIGN(na->nla_len);
+ 


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-15 15:40 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-15 15:40 UTC (permalink / raw)
  To: gentoo-commits

commit:     5011f8bae7898661bacd5c796876df32f84ea2d8
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 15 15:19:42 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 15 15:40:12 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5011f8ba

Update distro Kconfig to support needed options for elogind

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 20b9f54..581cb20 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -1,14 +1,14 @@
---- a/Kconfig	2019-12-30 16:37:13.825731109 -0500
-+++ b/Kconfig	2019-12-30 16:36:59.575609049 -0500
+--- a/Kconfig	2020-04-15 11:05:30.202413863 -0400
++++ b/Kconfig	2020-04-15 10:37:45.683952949 -0400
 @@ -32,3 +32,5 @@ source "lib/Kconfig"
  source "lib/Kconfig.debug"
  
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2019-12-30 10:19:12.810163556 -0500
-+++ b/distro/Kconfig	2019-12-30 16:42:52.928524222 -0500
-@@ -0,0 +1,151 @@
+--- /dev/null	2020-04-15 02:49:37.900191585 -0400
++++ b/distro/Kconfig	2020-04-15 11:07:10.952929540 -0400
+@@ -0,0 +1,156 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -91,7 +91,12 @@
 +	depends on GENTOO_LINUX
 +
 +	select BINFMT_SCRIPT
++	select CGROUPS
++	select EPOLL
 +	select FILE_LOCKING
++	select INOTIFY_USER
++	select SIGNALFD
++	select TIMERFD
 +
 +	help
 +		The init system is the first thing that loads after the kernel booted.


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-17 14:50 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-17 14:50 UTC (permalink / raw)
  To: gentoo-commits

commit:     9ec27069752c86b01959df8d9eeb37a32e537286
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Fri Apr 17 14:49:48 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Fri Apr 17 14:49:48 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9ec27069

Linux patch 5.6.5

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |     4 +
 1004_linux-5.6.5.patch | 10033 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 10037 insertions(+)

diff --git a/0000_README b/0000_README
index 4f1ee49..7f000bc 100644
--- a/0000_README
+++ b/0000_README
@@ -59,6 +59,10 @@ Patch:  1003_linux-5.6.4.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.4
 
+Patch:  1004_linux-5.6.5.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.5
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1004_linux-5.6.5.patch b/1004_linux-5.6.5.patch
new file mode 100644
index 0000000..a55054c
--- /dev/null
+++ b/1004_linux-5.6.5.patch
@@ -0,0 +1,10033 @@
+diff --git a/Documentation/admin-guide/sysctl/user.rst b/Documentation/admin-guide/sysctl/user.rst
+index 650eaa03f15e..c45824589339 100644
+--- a/Documentation/admin-guide/sysctl/user.rst
++++ b/Documentation/admin-guide/sysctl/user.rst
+@@ -65,6 +65,12 @@ max_pid_namespaces
+   The maximum number of pid namespaces that any user in the current
+   user namespace may create.
+ 
++max_time_namespaces
++===================
++
++  The maximum number of time namespaces that any user in the current
++  user namespace may create.
++
+ max_user_namespaces
+ ===================
+ 
+diff --git a/Documentation/sound/hd-audio/index.rst b/Documentation/sound/hd-audio/index.rst
+index f8a72ffffe66..6e12de9fc34e 100644
+--- a/Documentation/sound/hd-audio/index.rst
++++ b/Documentation/sound/hd-audio/index.rst
+@@ -8,3 +8,4 @@ HD-Audio
+    models
+    controls
+    dp-mst
++   realtek-pc-beep
+diff --git a/Documentation/sound/hd-audio/models.rst b/Documentation/sound/hd-audio/models.rst
+index 11298f0ce44d..0ea967d34583 100644
+--- a/Documentation/sound/hd-audio/models.rst
++++ b/Documentation/sound/hd-audio/models.rst
+@@ -216,8 +216,6 @@ alc298-dell-aio
+     ALC298 fixups on Dell AIO machines
+ alc275-dell-xps
+     ALC275 fixups on Dell XPS models
+-alc256-dell-xps13
+-    ALC256 fixups on Dell XPS13
+ lenovo-spk-noise
+     Workaround for speaker noise on Lenovo machines
+ lenovo-hotkey
+diff --git a/Documentation/sound/hd-audio/realtek-pc-beep.rst b/Documentation/sound/hd-audio/realtek-pc-beep.rst
+new file mode 100644
+index 000000000000..be47c6f76a6e
+--- /dev/null
++++ b/Documentation/sound/hd-audio/realtek-pc-beep.rst
+@@ -0,0 +1,129 @@
++===============================
++Realtek PC Beep Hidden Register
++===============================
++
++This file documents the "PC Beep Hidden Register", which is present in certain
++Realtek HDA codecs and controls a muxer and pair of passthrough mixers that can
++route audio between pins but aren't themselves exposed as HDA widgets. As far
++as I can tell, these hidden routes are designed to allow flexible PC Beep output
++for codecs that don't have mixer widgets in their output paths. Why it's easier
++to hide a mixer behind an undocumented vendor register than to just expose it
++as a widget, I have no idea.
++
++Register Description
++====================
++
++The register is accessed via processing coefficient 0x36 on NID 20h. Bits not
++identified below have no discernible effect on my machine, a Dell XPS 13 9350::
++
++  MSB                           LSB
++  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
++  | |h|S|L|         | B |R|       | Known bits
++  +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+
++  |0|0|1|1|  0x7  |0|0x0|1|  0x7  | Reset value
++  +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
++
++1Ah input select (B): 2 bits
++  When zero, expose the PC Beep line (from the internal beep generator, when
++  enabled with the Set Beep Generation verb on NID 01h, or else from the
++  external PCBEEP pin) on the 1Ah pin node. When nonzero, expose the headphone
++  jack (or possibly Line In on some machines) input instead. If PC Beep is
++  selected, the 1Ah boost control has no effect.
++
++Amplify 1Ah loopback, left (L): 1 bit
++  Amplify the left channel of 1Ah before mixing it into outputs as specified
++  by h and S bits. Does not affect the level of 1Ah exposed to other widgets.
++
++Amplify 1Ah loopback, right (R): 1 bit
++  Amplify the right channel of 1Ah before mixing it into outputs as specified
++  by h and S bits. Does not affect the level of 1Ah exposed to other widgets.
++
++Loopback 1Ah to 21h [active low] (h): 1 bit
++  When zero, mix 1Ah (possibly with amplification, depending on L and R bits)
++  into 21h (headphone jack on my machine). Mixed signal respects the mute
++  setting on 21h.
++
++Loopback 1Ah to 14h (S): 1 bit
++  When one, mix 1Ah (possibly with amplification, depending on L and R bits)
++  into 14h (internal speaker on my machine). Mixed signal **ignores** the mute
++  setting on 14h and is present whenever 14h is configured as an output.
++
++Path diagrams
++=============
++
++1Ah input selection (DIV is the PC Beep divider set on NID 01h)::
++
++  <Beep generator>   <PCBEEP pin>    <Headphone jack>
++          |                |                |
++          +--DIV--+--!DIV--+       {1Ah boost control}
++                  |                         |
++                  +--(b == 0)--+--(b != 0)--+
++                               |
++               >1Ah (Beep/Headphone Mic/Line In)<
++
++Loopback of 1Ah to 21h/14h::
++
++               <1Ah (Beep/Headphone Mic/Line In)>
++                               |
++                        {amplify if L/R}
++                               |
++                  +-----!h-----+-----S-----+
++                  |                        |
++          {21h mute control}               |
++                  |                        |
++          >21h (Headphone)<     >14h (Internal Speaker)<
++
++Background
++==========
++
++All Realtek HDA codecs have a vendor-defined widget with node ID 20h which
++provides access to a bank of registers that control various codec functions.
++Registers are read and written via the standard HDA processing coefficient
++verbs (Set/Get Coefficient Index, Set/Get Processing Coefficient). The node is
++named "Realtek Vendor Registers" in public datasheets' verb listings and,
++apart from that, is entirely undocumented.
++
++This particular register, exposed at coefficient 0x36 and named in commits from
++Realtek, is of note: unlike most registers, which seem to control detailed
++amplifier parameters not in scope of the HDA specification, it controls audio
++routing which could just as easily have been defined using standard HDA mixer
++and selector widgets.
++
++Specifically, it selects between two sources for the input pin widget with Node
++ID (NID) 1Ah: the widget's signal can come either from an audio jack (on my
++laptop, a Dell XPS 13 9350, it's the headphone jack, but comments in Realtek
++commits indicate that it might be a Line In on some machines) or from the PC
++Beep line (which is itself multiplexed between the codec's internal beep
++generator and external PCBEEP pin, depending on if the beep generator is
++enabled via verbs on NID 01h). Additionally, it can mix (with optional
++amplification) that signal onto the 21h and/or 14h output pins.
++
++The register's reset value is 0x3717, corresponding to PC Beep on 1Ah that is
++then amplified and mixed into both the headphones and the speakers. Not only
++does this violate the HDA specification, which says that "[a vendor defined
++beep input pin] connection may be maintained *only* while the Link reset
++(**RST#**) is asserted", it means that we cannot ignore the register if we care
++about the input that 1Ah would otherwise expose or if the PCBEEP trace is
++poorly shielded and picks up chassis noise (both of which are the case on my
++machine).
++
++Unfortunately, there are lots of ways to get this register configuration wrong.
++Linux, it seems, has gone through most of them. For one, the register resets
++after S3 suspend: judging by existing code, this isn't the case for all vendor
++registers, and it's led to some fixes that improve behavior on cold boot but
++don't last after suspend. Other fixes have successfully switched the 1Ah input
++away from PC Beep but have failed to disable both loopback paths. On my
++machine, this means that the headphone input is amplified and looped back to
++the headphone output, which uses the exact same pins! As you might expect, this
++causes terrible headphone noise, the character of which is controlled by the
++1Ah boost control. (If you've seen instructions online to fix XPS 13 headphone
++noise by changing "Headphone Mic Boost" in ALSA, now you know why.)
++
++The information here has been obtained through black-box reverse engineering of
++the ALC256 codec's behavior and is not guaranteed to be correct. It likely
++also applies for the ALC255, ALC257, ALC235, and ALC236, since those codecs
++seem to be close relatives of the ALC256. (They all share one initialization
++function.) Additionally, other codecs like the ALC225 and ALC285 also have this
++register, judging by existing fixups in ``patch_realtek.c``, but specific
++data (e.g. node IDs, bit positions, pin mappings) for those codecs may differ
++from what I've described here.
+diff --git a/Makefile b/Makefile
+index 0a7e41471838..0d7098842d56 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 4
++SUBLEVEL = 5
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/exynos4210-universal_c210.dts b/arch/arm/boot/dts/exynos4210-universal_c210.dts
+index a1bdf7830a87..9dda6bdb9253 100644
+--- a/arch/arm/boot/dts/exynos4210-universal_c210.dts
++++ b/arch/arm/boot/dts/exynos4210-universal_c210.dts
+@@ -115,7 +115,7 @@
+ 		gpio-sck = <&gpy3 1 GPIO_ACTIVE_HIGH>;
+ 		gpio-mosi = <&gpy3 3 GPIO_ACTIVE_HIGH>;
+ 		num-chipselects = <1>;
+-		cs-gpios = <&gpy4 3 GPIO_ACTIVE_HIGH>;
++		cs-gpios = <&gpy4 3 GPIO_ACTIVE_LOW>;
+ 
+ 		lcd@0 {
+ 			compatible = "samsung,ld9040";
+@@ -124,8 +124,6 @@
+ 			vci-supply = <&ldo17_reg>;
+ 			reset-gpios = <&gpy4 5 GPIO_ACTIVE_HIGH>;
+ 			spi-max-frequency = <1200000>;
+-			spi-cpol;
+-			spi-cpha;
+ 			power-on-delay = <10>;
+ 			reset-delay = <10>;
+ 			panel-width-mm = <90>;
+diff --git a/arch/arm64/Makefile b/arch/arm64/Makefile
+index dca1a97751ab..4e6ce2d9196e 100644
+--- a/arch/arm64/Makefile
++++ b/arch/arm64/Makefile
+@@ -65,6 +65,10 @@ stack_protector_prepare: prepare0
+ 					include/generated/asm-offsets.h))
+ endif
+ 
++# Ensure that if the compiler supports branch protection we default it
++# off.
++KBUILD_CFLAGS += $(call cc-option,-mbranch-protection=none)
++
+ ifeq ($(CONFIG_CPU_BIG_ENDIAN), y)
+ KBUILD_CPPFLAGS	+= -mbig-endian
+ CHECKFLAGS	+= -D__AARCH64EB__
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+index 9893aa64dd0b..4462a68c0681 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+@@ -38,8 +38,7 @@
+ 	};
+ 
+ 	pmu {
+-		compatible = "arm,cortex-a53-pmu",
+-			     "arm,armv8-pmuv3";
++		compatible = "arm,cortex-a53-pmu";
+ 		interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
+index 3329283e38ab..06363c1bea3f 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-h6.dtsi
+@@ -70,8 +70,7 @@
+ 	};
+ 
+ 	pmu {
+-		compatible = "arm,cortex-a53-pmu",
+-			     "arm,armv8-pmuv3";
++		compatible = "arm,cortex-a53-pmu";
+ 		interrupts = <GIC_SPI 140 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 141 IRQ_TYPE_LEVEL_HIGH>,
+ 			     <GIC_SPI 142 IRQ_TYPE_LEVEL_HIGH>,
+diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+index 2f1f829450a2..6c9cc45fb417 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-kii-pro.dts
+@@ -76,3 +76,7 @@
+ 		};
+ 	};
+ };
++
++&ir {
++	linux,rc-map-name = "rc-videostrong-kii-pro";
++};
+diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+index e5df20a2d2f9..d86c5c7b82fc 100644
+--- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
++++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi
+@@ -296,6 +296,7 @@
+ 		interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
+ 		dma-coherent;
+ 		power-domains = <&k3_pds 151 TI_SCI_PD_EXCLUSIVE>;
++		clocks = <&k3_clks 151 2>, <&k3_clks 151 7>;
+ 		assigned-clocks = <&k3_clks 151 2>, <&k3_clks 151 7>;
+ 		assigned-clock-parents = <&k3_clks 151 4>,	/* set REF_CLK to 20MHz i.e. PER0_PLL/48 */
+ 					 <&k3_clks 151 9>;	/* set PIPE3_TXB_CLK to CLK_12M_RC/256 (for HS only) */
+@@ -335,6 +336,7 @@
+ 		interrupts = <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>;
+ 		dma-coherent;
+ 		power-domains = <&k3_pds 152 TI_SCI_PD_EXCLUSIVE>;
++		clocks = <&k3_clks 152 2>;
+ 		assigned-clocks = <&k3_clks 152 2>;
+ 		assigned-clock-parents = <&k3_clks 152 4>;	/* set REF_CLK to 20MHz i.e. PER0_PLL/48 */
+ 
+diff --git a/arch/arm64/kernel/armv8_deprecated.c b/arch/arm64/kernel/armv8_deprecated.c
+index 7832b3216370..950864bb0f9a 100644
+--- a/arch/arm64/kernel/armv8_deprecated.c
++++ b/arch/arm64/kernel/armv8_deprecated.c
+@@ -601,7 +601,7 @@ static struct undef_hook setend_hooks[] = {
+ 	},
+ 	{
+ 		/* Thumb mode */
+-		.instr_mask	= 0x0000fff7,
++		.instr_mask	= 0xfffffff7,
+ 		.instr_val	= 0x0000b650,
+ 		.pstate_mask	= (PSR_AA32_T_BIT | PSR_AA32_MODE_MASK),
+ 		.pstate_val	= (PSR_AA32_T_BIT | PSR_AA32_MODE_USR),
+diff --git a/arch/arm64/mm/ptdump_debugfs.c b/arch/arm64/mm/ptdump_debugfs.c
+index 1f2eae3e988b..d29d722ec3ec 100644
+--- a/arch/arm64/mm/ptdump_debugfs.c
++++ b/arch/arm64/mm/ptdump_debugfs.c
+@@ -1,5 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0
+ #include <linux/debugfs.h>
++#include <linux/memory_hotplug.h>
+ #include <linux/seq_file.h>
+ 
+ #include <asm/ptdump.h>
+@@ -7,7 +8,10 @@
+ static int ptdump_show(struct seq_file *m, void *v)
+ {
+ 	struct ptdump_info *info = m->private;
++
++	get_online_mems();
+ 	ptdump_walk(m, info);
++	put_online_mems();
+ 	return 0;
+ }
+ DEFINE_SHOW_ATTRIBUTE(ptdump);
+diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
+index 6bd1e97effdf..6501a842c41a 100644
+--- a/arch/mips/cavium-octeon/octeon-irq.c
++++ b/arch/mips/cavium-octeon/octeon-irq.c
+@@ -2199,6 +2199,9 @@ static int octeon_irq_cib_map(struct irq_domain *d,
+ 	}
+ 
+ 	cd = kzalloc(sizeof(*cd), GFP_KERNEL);
++	if (!cd)
++		return -ENOMEM;
++
+ 	cd->host_data = host_data;
+ 	cd->bit = hw;
+ 
+diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
+index 344e6e9ea43b..da407cdc2135 100644
+--- a/arch/mips/mm/tlbex.c
++++ b/arch/mips/mm/tlbex.c
+@@ -1480,6 +1480,7 @@ static void build_r4000_tlb_refill_handler(void)
+ 
+ static void setup_pw(void)
+ {
++	unsigned int pwctl;
+ 	unsigned long pgd_i, pgd_w;
+ #ifndef __PAGETABLE_PMD_FOLDED
+ 	unsigned long pmd_i, pmd_w;
+@@ -1506,6 +1507,7 @@ static void setup_pw(void)
+ 
+ 	pte_i = ilog2(_PAGE_GLOBAL);
+ 	pte_w = 0;
++	pwctl = 1 << 30; /* Set PWDirExt */
+ 
+ #ifndef __PAGETABLE_PMD_FOLDED
+ 	write_c0_pwfield(pgd_i << 24 | pmd_i << 12 | pt_i << 6 | pte_i);
+@@ -1516,8 +1518,9 @@ static void setup_pw(void)
+ #endif
+ 
+ #ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
+-	write_c0_pwctl(1 << 6 | psn);
++	pwctl |= (1 << 6 | psn);
+ #endif
++	write_c0_pwctl(pwctl);
+ 	write_c0_kpgd((long)swapper_pg_dir);
+ 	kscratch_used_mask |= (1 << 7); /* KScratch6 is used for KPGD */
+ }
+diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+index 8fd8599c9395..3f9ae3585ab9 100644
+--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
++++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
+@@ -156,6 +156,12 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ extern int hash__has_transparent_hugepage(void);
+ #endif
+ 
++static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
++{
++	BUG();
++	return pmd;
++}
++
+ #endif /* !__ASSEMBLY__ */
+ 
+ #endif /* _ASM_POWERPC_BOOK3S_64_HASH_4K_H */
+diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
+index d1d9177d9ebd..0729c034e56f 100644
+--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
++++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
+@@ -246,7 +246,7 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
+  */
+ static inline int hash__pmd_trans_huge(pmd_t pmd)
+ {
+-	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
++	return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP)) ==
+ 		  (_PAGE_PTE | H_PAGE_THP_HUGE));
+ }
+ 
+@@ -272,6 +272,12 @@ extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ 				       unsigned long addr, pmd_t *pmdp);
+ extern int hash__has_transparent_hugepage(void);
+ #endif /*  CONFIG_TRANSPARENT_HUGEPAGE */
++
++static inline pmd_t hash__pmd_mkdevmap(pmd_t pmd)
++{
++	return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE | _PAGE_DEVMAP));
++}
++
+ #endif	/* __ASSEMBLY__ */
+ 
+ #endif /* _ASM_POWERPC_BOOK3S_64_HASH_64K_H */
+diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
+index 201a69e6a355..368b136517e0 100644
+--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
++++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
+@@ -1303,7 +1303,9 @@ extern void serialize_against_pte_lookup(struct mm_struct *mm);
+ 
+ static inline pmd_t pmd_mkdevmap(pmd_t pmd)
+ {
+-	return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP));
++	if (radix_enabled())
++		return radix__pmd_mkdevmap(pmd);
++	return hash__pmd_mkdevmap(pmd);
+ }
+ 
+ static inline int pmd_devmap(pmd_t pmd)
+diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
+index d97db3ad9aae..a1c60d5b50af 100644
+--- a/arch/powerpc/include/asm/book3s/64/radix.h
++++ b/arch/powerpc/include/asm/book3s/64/radix.h
+@@ -263,6 +263,11 @@ static inline int radix__has_transparent_hugepage(void)
+ }
+ #endif
+ 
++static inline pmd_t radix__pmd_mkdevmap(pmd_t pmd)
++{
++	return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_DEVMAP));
++}
++
+ extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ 					     unsigned long page_size,
+ 					     unsigned long phys);
+diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
+index 3d76e1c388c2..28c3d936fdf3 100644
+--- a/arch/powerpc/include/asm/drmem.h
++++ b/arch/powerpc/include/asm/drmem.h
+@@ -27,12 +27,12 @@ struct drmem_lmb_info {
+ extern struct drmem_lmb_info *drmem_info;
+ 
+ #define for_each_drmem_lmb_in_range(lmb, start, end)		\
+-	for ((lmb) = (start); (lmb) <= (end); (lmb)++)
++	for ((lmb) = (start); (lmb) < (end); (lmb)++)
+ 
+ #define for_each_drmem_lmb(lmb)					\
+ 	for_each_drmem_lmb_in_range((lmb),			\
+ 		&drmem_info->lmbs[0],				\
+-		&drmem_info->lmbs[drmem_info->n_lmbs - 1])
++		&drmem_info->lmbs[drmem_info->n_lmbs])
+ 
+ /*
+  * The of_drconf_cell_v1 struct defines the layout of the LMB data
+diff --git a/arch/powerpc/include/asm/setjmp.h b/arch/powerpc/include/asm/setjmp.h
+index e9f81bb3f83b..f798e80e4106 100644
+--- a/arch/powerpc/include/asm/setjmp.h
++++ b/arch/powerpc/include/asm/setjmp.h
+@@ -7,7 +7,9 @@
+ 
+ #define JMP_BUF_LEN    23
+ 
+-extern long setjmp(long *) __attribute__((returns_twice));
+-extern void longjmp(long *, long) __attribute__((noreturn));
++typedef long jmp_buf[JMP_BUF_LEN];
++
++extern int setjmp(jmp_buf env) __attribute__((returns_twice));
++extern void longjmp(jmp_buf env, int val) __attribute__((noreturn));
+ 
+ #endif /* _ASM_POWERPC_SETJMP_H */
+diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
+index 182b4047c1ef..36bc0d5c4f3a 100644
+--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
++++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
+@@ -139,7 +139,6 @@ static void __init cpufeatures_setup_cpu(void)
+ 	/* Initialize the base environment -- clear FSCR/HFSCR.  */
+ 	hv_mode = !!(mfmsr() & MSR_HV);
+ 	if (hv_mode) {
+-		/* CPU_FTR_HVMODE is used early in PACA setup */
+ 		cur_cpu_spec->cpu_features |= CPU_FTR_HVMODE;
+ 		mtspr(SPRN_HFSCR, 0);
+ 	}
+diff --git a/arch/powerpc/kernel/kprobes.c b/arch/powerpc/kernel/kprobes.c
+index 2d27ec4feee4..9b340af02c38 100644
+--- a/arch/powerpc/kernel/kprobes.c
++++ b/arch/powerpc/kernel/kprobes.c
+@@ -264,6 +264,9 @@ int kprobe_handler(struct pt_regs *regs)
+ 	if (user_mode(regs))
+ 		return 0;
+ 
++	if (!(regs->msr & MSR_IR) || !(regs->msr & MSR_DR))
++		return 0;
++
+ 	/*
+ 	 * We don't want to be preempted for the entire
+ 	 * duration of kprobe processing
+diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
+index 949eceb254d8..3f91ccaa9c74 100644
+--- a/arch/powerpc/kernel/paca.c
++++ b/arch/powerpc/kernel/paca.c
+@@ -176,7 +176,7 @@ static struct slb_shadow * __init new_slb_shadow(int cpu, unsigned long limit)
+ struct paca_struct **paca_ptrs __read_mostly;
+ EXPORT_SYMBOL(paca_ptrs);
+ 
+-void __init initialise_paca(struct paca_struct *new_paca, int cpu)
++void __init __nostackprotector initialise_paca(struct paca_struct *new_paca, int cpu)
+ {
+ #ifdef CONFIG_PPC_PSERIES
+ 	new_paca->lppaca_ptr = NULL;
+@@ -205,7 +205,7 @@ void __init initialise_paca(struct paca_struct *new_paca, int cpu)
+ }
+ 
+ /* Put the paca pointer into r13 and SPRG_PACA */
+-void setup_paca(struct paca_struct *new_paca)
++void __nostackprotector setup_paca(struct paca_struct *new_paca)
+ {
+ 	/* Setup r13 */
+ 	local_paca = new_paca;
+@@ -214,11 +214,15 @@ void setup_paca(struct paca_struct *new_paca)
+ 	/* On Book3E, initialize the TLB miss exception frames */
+ 	mtspr(SPRN_SPRG_TLB_EXFRAME, local_paca->extlb);
+ #else
+-	/* In HV mode, we setup both HPACA and PACA to avoid problems
++	/*
++	 * In HV mode, we setup both HPACA and PACA to avoid problems
+ 	 * if we do a GET_PACA() before the feature fixups have been
+-	 * applied
++	 * applied.
++	 *
++	 * Normally you should test against CPU_FTR_HVMODE, but CPU features
++	 * are not yet set up when we first reach here.
+ 	 */
+-	if (early_cpu_has_feature(CPU_FTR_HVMODE))
++	if (mfmsr() & MSR_HV)
+ 		mtspr(SPRN_SPRG_HPACA, local_paca);
+ #endif
+ 	mtspr(SPRN_SPRG_PACA, local_paca);
+diff --git a/arch/powerpc/kernel/setup.h b/arch/powerpc/kernel/setup.h
+index 2dd0d9cb5a20..2ec835574cc9 100644
+--- a/arch/powerpc/kernel/setup.h
++++ b/arch/powerpc/kernel/setup.h
+@@ -8,6 +8,12 @@
+ #ifndef __ARCH_POWERPC_KERNEL_SETUP_H
+ #define __ARCH_POWERPC_KERNEL_SETUP_H
+ 
++#ifdef CONFIG_CC_IS_CLANG
++#define __nostackprotector
++#else
++#define __nostackprotector __attribute__((__optimize__("no-stack-protector")))
++#endif
++
+ void initialize_cache_info(void);
+ void irqstack_early_init(void);
+ 
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index e05e6dd67ae6..438a9befce41 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -279,24 +279,42 @@ void __init record_spr_defaults(void)
+  * device-tree is not accessible via normal means at this point.
+  */
+ 
+-void __init early_setup(unsigned long dt_ptr)
++void __init __nostackprotector early_setup(unsigned long dt_ptr)
+ {
+ 	static __initdata struct paca_struct boot_paca;
+ 
+ 	/* -------- printk is _NOT_ safe to use here ! ------- */
+ 
+-	/* Try new device tree based feature discovery ... */
+-	if (!dt_cpu_ftrs_init(__va(dt_ptr)))
+-		/* Otherwise use the old style CPU table */
+-		identify_cpu(0, mfspr(SPRN_PVR));
+-
+-	/* Assume we're on cpu 0 for now. Don't write to the paca yet! */
++	/*
++	 * Assume we're on cpu 0 for now.
++	 *
++	 * We need to load a PACA very early for a few reasons.
++	 *
++	 * The stack protector canary is stored in the paca, so as soon as we
++	 * call any stack protected code we need r13 pointing somewhere valid.
++	 *
++	 * If we are using kcov it will call in_task() in its instrumentation,
++	 * which relies on the current task from the PACA.
++	 *
++	 * dt_cpu_ftrs_init() calls into generic OF/fdt code, as well as
++	 * printk(), which can trigger both stack protector and kcov.
++	 *
++	 * percpu variables and spin locks also use the paca.
++	 *
++	 * So set up a temporary paca. It will be replaced below once we know
++	 * what CPU we are on.
++	 */
+ 	initialise_paca(&boot_paca, 0);
+ 	setup_paca(&boot_paca);
+ 	fixup_boot_paca();
+ 
+ 	/* -------- printk is now safe to use ------- */
+ 
++	/* Try new device tree based feature discovery ... */
++	if (!dt_cpu_ftrs_init(__va(dt_ptr)))
++		/* Otherwise use the old style CPU table */
++		identify_cpu(0, mfspr(SPRN_PVR));
++
+ 	/* Enable early debugging if any specified (see udbg.h) */
+ 	udbg_early_init();
+ 
+diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
+index 84ed2e77ef9c..adfde59cf4ba 100644
+--- a/arch/powerpc/kernel/signal_64.c
++++ b/arch/powerpc/kernel/signal_64.c
+@@ -473,8 +473,10 @@ static long restore_tm_sigcontexts(struct task_struct *tsk,
+ 	err |= __get_user(tsk->thread.ckpt_regs.ccr,
+ 			  &sc->gp_regs[PT_CCR]);
+ 
++	/* Don't allow userspace to set the trap value */
++	regs->trap = 0;
++
+ 	/* These regs are not checkpointed; they can go in 'regs'. */
+-	err |= __get_user(regs->trap, &sc->gp_regs[PT_TRAP]);
+ 	err |= __get_user(regs->dar, &sc->gp_regs[PT_DAR]);
+ 	err |= __get_user(regs->dsisr, &sc->gp_regs[PT_DSISR]);
+ 	err |= __get_user(regs->result, &sc->gp_regs[PT_RESULT]);
+diff --git a/arch/powerpc/kexec/Makefile b/arch/powerpc/kexec/Makefile
+index 378f6108a414..86380c69f5ce 100644
+--- a/arch/powerpc/kexec/Makefile
++++ b/arch/powerpc/kexec/Makefile
+@@ -3,9 +3,6 @@
+ # Makefile for the linux kernel.
+ #
+ 
+-# Avoid clang warnings around longjmp/setjmp declarations
+-CFLAGS_crash.o += -ffreestanding
+-
+ obj-y				+= core.o crash.o core_$(BITS).o
+ 
+ obj-$(CONFIG_PPC32)		+= relocate_32.o
+diff --git a/arch/powerpc/kvm/book3s_hv_uvmem.c b/arch/powerpc/kvm/book3s_hv_uvmem.c
+index 79b1202b1c62..9d26614b2a77 100644
+--- a/arch/powerpc/kvm/book3s_hv_uvmem.c
++++ b/arch/powerpc/kvm/book3s_hv_uvmem.c
+@@ -806,6 +806,9 @@ out:
+ 
+ void kvmppc_uvmem_free(void)
+ {
++	if (!kvmppc_uvmem_bitmap)
++		return;
++
+ 	memunmap_pages(&kvmppc_uvmem_pgmap);
+ 	release_mem_region(kvmppc_uvmem_pgmap.res.start,
+ 			   resource_size(&kvmppc_uvmem_pgmap.res));
+diff --git a/arch/powerpc/mm/kasan/kasan_init_32.c b/arch/powerpc/mm/kasan/kasan_init_32.c
+index d2bed3fcb719..1169ad1b6730 100644
+--- a/arch/powerpc/mm/kasan/kasan_init_32.c
++++ b/arch/powerpc/mm/kasan/kasan_init_32.c
+@@ -101,7 +101,7 @@ static void __init kasan_remap_early_shadow_ro(void)
+ 
+ 	kasan_populate_pte(kasan_early_shadow_pte, prot);
+ 
+-	for (k_cur = k_start & PAGE_MASK; k_cur < k_end; k_cur += PAGE_SIZE) {
++	for (k_cur = k_start & PAGE_MASK; k_cur != k_end; k_cur += PAGE_SIZE) {
+ 		pmd_t *pmd = pmd_offset(pud_offset(pgd_offset_k(k_cur), k_cur), k_cur);
+ 		pte_t *ptep = pte_offset_kernel(pmd, k_cur);
+ 
+diff --git a/arch/powerpc/mm/nohash/tlb_low.S b/arch/powerpc/mm/nohash/tlb_low.S
+index 2ca407cedbe7..eaeee402f96e 100644
+--- a/arch/powerpc/mm/nohash/tlb_low.S
++++ b/arch/powerpc/mm/nohash/tlb_low.S
+@@ -397,7 +397,7 @@ _GLOBAL(set_context)
+  * extern void loadcam_entry(unsigned int index)
+  *
+  * Load TLBCAM[index] entry in to the L2 CAM MMU
+- * Must preserve r7, r8, r9, and r10
++ * Must preserve r7, r8, r9, r10 and r11
+  */
+ _GLOBAL(loadcam_entry)
+ 	mflr	r5
+@@ -433,6 +433,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_BIG_PHYS)
+  */
+ _GLOBAL(loadcam_multi)
+ 	mflr	r8
++	/* Don't switch to AS=1 if already there */
++	mfmsr	r11
++	andi.	r11,r11,MSR_IS
++	bne	10f
+ 
+ 	/*
+ 	 * Set up temporary TLB entry that is the same as what we're
+@@ -458,6 +462,7 @@ _GLOBAL(loadcam_multi)
+ 	mtmsr	r6
+ 	isync
+ 
++10:
+ 	mr	r9,r3
+ 	add	r10,r3,r4
+ 2:	bl	loadcam_entry
+@@ -466,6 +471,10 @@ _GLOBAL(loadcam_multi)
+ 	mr	r3,r9
+ 	blt	2b
+ 
++	/* Don't return to AS=0 if we were in AS=1 at function start */
++	andi.	r11,r11,MSR_IS
++	bne	3f
++
+ 	/* Return to AS=0 and clear the temporary entry */
+ 	mfmsr	r6
+ 	rlwinm.	r6,r6,0,~(MSR_IS|MSR_DS)
+@@ -481,6 +490,7 @@ _GLOBAL(loadcam_multi)
+ 	tlbwe
+ 	isync
+ 
++3:
+ 	mtlr	r8
+ 	blr
+ #endif
+diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
+index a4d40a3ceea3..fd22ec41c008 100644
+--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
++++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
+@@ -223,7 +223,7 @@ static int get_lmb_range(u32 drc_index, int n_lmbs,
+ 			 struct drmem_lmb **end_lmb)
+ {
+ 	struct drmem_lmb *lmb, *start, *end;
+-	struct drmem_lmb *last_lmb;
++	struct drmem_lmb *limit;
+ 
+ 	start = NULL;
+ 	for_each_drmem_lmb(lmb) {
+@@ -236,10 +236,10 @@ static int get_lmb_range(u32 drc_index, int n_lmbs,
+ 	if (!start)
+ 		return -EINVAL;
+ 
+-	end = &start[n_lmbs - 1];
++	end = &start[n_lmbs];
+ 
+-	last_lmb = &drmem_info->lmbs[drmem_info->n_lmbs - 1];
+-	if (end > last_lmb)
++	limit = &drmem_info->lmbs[drmem_info->n_lmbs];
++	if (end > limit)
+ 		return -EINVAL;
+ 
+ 	*start_lmb = start;
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index 9651ca061828..fe8d396e2301 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -68,13 +68,6 @@ static u32 xive_ipi_irq;
+ /* Xive state for each CPU */
+ static DEFINE_PER_CPU(struct xive_cpu *, xive_cpu);
+ 
+-/*
+- * A "disabled" interrupt should never fire, to catch problems
+- * we set its logical number to this
+- */
+-#define XIVE_BAD_IRQ		0x7fffffff
+-#define XIVE_MAX_IRQ		(XIVE_BAD_IRQ - 1)
+-
+ /* An invalid CPU target */
+ #define XIVE_INVALID_TARGET	(-1)
+ 
+@@ -265,11 +258,15 @@ notrace void xmon_xive_do_dump(int cpu)
+ 
+ int xmon_xive_get_irq_config(u32 hw_irq, struct irq_data *d)
+ {
++	struct irq_chip *chip = irq_data_get_irq_chip(d);
+ 	int rc;
+ 	u32 target;
+ 	u8 prio;
+ 	u32 lirq;
+ 
++	if (!is_xive_irq(chip))
++		return -EINVAL;
++
+ 	rc = xive_ops->get_irq_config(hw_irq, &target, &prio, &lirq);
+ 	if (rc) {
+ 		xmon_printf("IRQ 0x%08x : no config rc=%d\n", hw_irq, rc);
+@@ -1150,7 +1147,7 @@ static int xive_setup_cpu_ipi(unsigned int cpu)
+ 	xc = per_cpu(xive_cpu, cpu);
+ 
+ 	/* Check if we are already setup */
+-	if (xc->hw_ipi != 0)
++	if (xc->hw_ipi != XIVE_BAD_IRQ)
+ 		return 0;
+ 
+ 	/* Grab an IPI from the backend, this will populate xc->hw_ipi */
+@@ -1187,7 +1184,7 @@ static void xive_cleanup_cpu_ipi(unsigned int cpu, struct xive_cpu *xc)
+ 	/* Disable the IPI and free the IRQ data */
+ 
+ 	/* Already cleaned up ? */
+-	if (xc->hw_ipi == 0)
++	if (xc->hw_ipi == XIVE_BAD_IRQ)
+ 		return;
+ 
+ 	/* Mask the IPI */
+@@ -1343,6 +1340,7 @@ static int xive_prepare_cpu(unsigned int cpu)
+ 		if (np)
+ 			xc->chip_id = of_get_ibm_chip_id(np);
+ 		of_node_put(np);
++		xc->hw_ipi = XIVE_BAD_IRQ;
+ 
+ 		per_cpu(xive_cpu, cpu) = xc;
+ 	}
+diff --git a/arch/powerpc/sysdev/xive/native.c b/arch/powerpc/sysdev/xive/native.c
+index 0ff6b739052c..50e1a8e02497 100644
+--- a/arch/powerpc/sysdev/xive/native.c
++++ b/arch/powerpc/sysdev/xive/native.c
+@@ -312,7 +312,7 @@ static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc)
+ 	s64 rc;
+ 
+ 	/* Free the IPI */
+-	if (!xc->hw_ipi)
++	if (xc->hw_ipi == XIVE_BAD_IRQ)
+ 		return;
+ 	for (;;) {
+ 		rc = opal_xive_free_irq(xc->hw_ipi);
+@@ -320,7 +320,7 @@ static void xive_native_put_ipi(unsigned int cpu, struct xive_cpu *xc)
+ 			msleep(OPAL_BUSY_DELAY_MS);
+ 			continue;
+ 		}
+-		xc->hw_ipi = 0;
++		xc->hw_ipi = XIVE_BAD_IRQ;
+ 		break;
+ 	}
+ }
+diff --git a/arch/powerpc/sysdev/xive/spapr.c b/arch/powerpc/sysdev/xive/spapr.c
+index 55dc61cb4867..3f15615712b5 100644
+--- a/arch/powerpc/sysdev/xive/spapr.c
++++ b/arch/powerpc/sysdev/xive/spapr.c
+@@ -560,11 +560,11 @@ static int xive_spapr_get_ipi(unsigned int cpu, struct xive_cpu *xc)
+ 
+ static void xive_spapr_put_ipi(unsigned int cpu, struct xive_cpu *xc)
+ {
+-	if (!xc->hw_ipi)
++	if (xc->hw_ipi == XIVE_BAD_IRQ)
+ 		return;
+ 
+ 	xive_irq_bitmap_free(xc->hw_ipi);
+-	xc->hw_ipi = 0;
++	xc->hw_ipi = XIVE_BAD_IRQ;
+ }
+ #endif /* CONFIG_SMP */
+ 
+diff --git a/arch/powerpc/sysdev/xive/xive-internal.h b/arch/powerpc/sysdev/xive/xive-internal.h
+index 59cd366e7933..382980f4de2d 100644
+--- a/arch/powerpc/sysdev/xive/xive-internal.h
++++ b/arch/powerpc/sysdev/xive/xive-internal.h
+@@ -5,6 +5,13 @@
+ #ifndef __XIVE_INTERNAL_H
+ #define __XIVE_INTERNAL_H
+ 
++/*
++ * A "disabled" interrupt should never fire, to catch problems
++ * we set its logical number to this
++ */
++#define XIVE_BAD_IRQ		0x7fffffff
++#define XIVE_MAX_IRQ		(XIVE_BAD_IRQ - 1)
++
+ /* Each CPU carry one of these with various per-CPU state */
+ struct xive_cpu {
+ #ifdef CONFIG_SMP
+diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
+index c3842dbeb1b7..6f9cccea54f3 100644
+--- a/arch/powerpc/xmon/Makefile
++++ b/arch/powerpc/xmon/Makefile
+@@ -1,9 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for xmon
+ 
+-# Avoid clang warnings around longjmp/setjmp declarations
+-subdir-ccflags-y := -ffreestanding
+-
+ GCOV_PROFILE := n
+ KCOV_INSTRUMENT := n
+ UBSAN_SANITIZE := n
+diff --git a/arch/s390/kernel/diag.c b/arch/s390/kernel/diag.c
+index e9dac9a24d3f..61f2b0412345 100644
+--- a/arch/s390/kernel/diag.c
++++ b/arch/s390/kernel/diag.c
+@@ -84,7 +84,7 @@ static int show_diag_stat(struct seq_file *m, void *v)
+ 
+ static void *show_diag_stat_start(struct seq_file *m, loff_t *pos)
+ {
+-	return *pos <= nr_cpu_ids ? (void *)((unsigned long) *pos + 1) : NULL;
++	return *pos <= NR_DIAG_STAT ? (void *)((unsigned long) *pos + 1) : NULL;
+ }
+ 
+ static void *show_diag_stat_next(struct seq_file *m, void *v, loff_t *pos)
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 076090f9e666..4f6c22d72072 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -1202,6 +1202,7 @@ static int vsie_run(struct kvm_vcpu *vcpu, struct vsie_page *vsie_page)
+ 		scb_s->iprcc = PGM_ADDRESSING;
+ 		scb_s->pgmilc = 4;
+ 		scb_s->gpsw.addr = __rewind_psw(scb_s->gpsw, 4);
++		rc = 1;
+ 	}
+ 	return rc;
+ }
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index edcdca97e85e..9d9ab77d02dd 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -787,14 +787,18 @@ static void gmap_call_notifier(struct gmap *gmap, unsigned long start,
+ static inline unsigned long *gmap_table_walk(struct gmap *gmap,
+ 					     unsigned long gaddr, int level)
+ {
++	const int asce_type = gmap->asce & _ASCE_TYPE_MASK;
+ 	unsigned long *table;
+ 
+ 	if ((gmap->asce & _ASCE_TYPE_MASK) + 4 < (level * 4))
+ 		return NULL;
+ 	if (gmap_is_shadow(gmap) && gmap->removed)
+ 		return NULL;
+-	if (gaddr & (-1UL << (31 + ((gmap->asce & _ASCE_TYPE_MASK) >> 2)*11)))
++
++	if (asce_type != _ASCE_TYPE_REGION1 &&
++	    gaddr & (-1UL << (31 + (asce_type >> 2) * 11)))
+ 		return NULL;
++
+ 	table = gmap->table;
+ 	switch (gmap->asce & _ASCE_TYPE_MASK) {
+ 	case _ASCE_TYPE_REGION1:
+diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
+index 73f17d0544dd..7f7e8b8518fe 100644
+--- a/arch/x86/boot/compressed/head_32.S
++++ b/arch/x86/boot/compressed/head_32.S
+@@ -106,7 +106,7 @@ SYM_FUNC_START(startup_32)
+ 	notl	%eax
+ 	andl    %eax, %ebx
+ 	cmpl	$LOAD_PHYSICAL_ADDR, %ebx
+-	jge	1f
++	jae	1f
+ #endif
+ 	movl	$LOAD_PHYSICAL_ADDR, %ebx
+ 1:
+diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
+index 1f1f6c8139b3..afde2aa8382e 100644
+--- a/arch/x86/boot/compressed/head_64.S
++++ b/arch/x86/boot/compressed/head_64.S
+@@ -106,7 +106,7 @@ SYM_FUNC_START(startup_32)
+ 	notl	%eax
+ 	andl	%eax, %ebx
+ 	cmpl	$LOAD_PHYSICAL_ADDR, %ebx
+-	jge	1f
++	jae	1f
+ #endif
+ 	movl	$LOAD_PHYSICAL_ADDR, %ebx
+ 1:
+@@ -296,7 +296,7 @@ SYM_CODE_START(startup_64)
+ 	notq	%rax
+ 	andq	%rax, %rbp
+ 	cmpq	$LOAD_PHYSICAL_ADDR, %rbp
+-	jge	1f
++	jae	1f
+ #endif
+ 	movq	$LOAD_PHYSICAL_ADDR, %rbp
+ 1:
+diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
+index 7e0560442538..39243df98100 100644
+--- a/arch/x86/entry/entry_32.S
++++ b/arch/x86/entry/entry_32.S
+@@ -1694,6 +1694,7 @@ SYM_CODE_START(int3)
+ SYM_CODE_END(int3)
+ 
+ SYM_CODE_START(general_protection)
++	ASM_CLAC
+ 	pushl	$do_general_protection
+ 	jmp	common_exception
+ SYM_CODE_END(general_protection)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 98959e8cd448..d79b40cd8283 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1180,7 +1180,7 @@ struct kvm_x86_ops {
+ 	bool (*pt_supported)(void);
+ 	bool (*pku_supported)(void);
+ 
+-	int (*check_nested_events)(struct kvm_vcpu *vcpu, bool external_intr);
++	int (*check_nested_events)(struct kvm_vcpu *vcpu);
+ 	void (*request_immediate_exit)(struct kvm_vcpu *vcpu);
+ 
+ 	void (*sched_in)(struct kvm_vcpu *kvm, int cpu);
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 7e118660bbd9..64a03f226ab7 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -627,12 +627,15 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+ 	return __pmd(val);
+ }
+ 
+-/* mprotect needs to preserve PAT bits when updating vm_page_prot */
++/*
++ * mprotect needs to preserve PAT and encryption bits when updating
++ * vm_page_prot
++ */
+ #define pgprot_modify pgprot_modify
+ static inline pgprot_t pgprot_modify(pgprot_t oldprot, pgprot_t newprot)
+ {
+ 	pgprotval_t preservebits = pgprot_val(oldprot) & _PAGE_CHG_MASK;
+-	pgprotval_t addbits = pgprot_val(newprot);
++	pgprotval_t addbits = pgprot_val(newprot) & ~_PAGE_CHG_MASK;
+ 	return __pgprot(preservebits | addbits);
+ }
+ 
+diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
+index 0239998d8cdc..65c2ecd730c5 100644
+--- a/arch/x86/include/asm/pgtable_types.h
++++ b/arch/x86/include/asm/pgtable_types.h
+@@ -118,7 +118,7 @@
+  */
+ #define _PAGE_CHG_MASK	(PTE_PFN_MASK | _PAGE_PCD | _PAGE_PWT |		\
+ 			 _PAGE_SPECIAL | _PAGE_ACCESSED | _PAGE_DIRTY |	\
+-			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP)
++			 _PAGE_SOFT_DIRTY | _PAGE_DEVMAP | _PAGE_ENC)
+ #define _HPAGE_CHG_MASK (_PAGE_CHG_MASK | _PAGE_PSE)
+ 
+ /*
+diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
+index 04205ce127a1..f9e84a0e2fa2 100644
+--- a/arch/x86/kernel/acpi/boot.c
++++ b/arch/x86/kernel/acpi/boot.c
+@@ -1740,7 +1740,7 @@ int __acpi_acquire_global_lock(unsigned int *lock)
+ 		new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
+ 		val = cmpxchg(lock, old, new);
+ 	} while (unlikely (val != old));
+-	return (new < 3) ? -1 : 0;
++	return ((new & 0x3) < 3) ? -1 : 0;
+ }
+ 
+ int __acpi_release_global_lock(unsigned int *lock)
+diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
+index e0cbe4f2af49..c65adaf81384 100644
+--- a/arch/x86/kernel/tsc_msr.c
++++ b/arch/x86/kernel/tsc_msr.c
+@@ -15,18 +15,46 @@
+ #include <asm/param.h>
+ #include <asm/tsc.h>
+ 
+-#define MAX_NUM_FREQS	9
++#define MAX_NUM_FREQS	16 /* 4 bits to select the frequency */
++
++/*
++ * The frequency numbers in the SDM are e.g. 83.3 MHz, which does not contain a
++ * lot of accuracy which leads to clock drift. As far as we know Bay Trail SoCs
++ * use a 25 MHz crystal and Cherry Trail uses a 19.2 MHz crystal, the crystal
++ * is the source clk for a root PLL which outputs 1600 and 100 MHz. It is
++ * unclear if the root PLL outputs are used directly by the CPU clock PLL or
++ * if there is another PLL in between.
++ * This does not matter though, we can model the chain of PLLs as a single PLL
++ * with a quotient equal to the quotients of all PLLs in the chain multiplied.
++ * So we can create a simplified model of the CPU clock setup using a reference
++ * clock of 100 MHz plus a quotient which gets us as close to the frequency
++ * from the SDM as possible.
++ * For the 83.3 MHz example from above this would give us 100 MHz * 5 / 6 =
++ * 83 and 1/3 MHz, which matches exactly what has been measured on actual hw.
++ */
++#define TSC_REFERENCE_KHZ 100000
++
++struct muldiv {
++	u32 multiplier;
++	u32 divider;
++};
+ 
+ /*
+  * If MSR_PERF_STAT[31] is set, the maximum resolved bus ratio can be
+  * read in MSR_PLATFORM_ID[12:8], otherwise in MSR_PERF_STAT[44:40].
+  * Unfortunately some Intel Atom SoCs aren't quite compliant to this,
+  * so we need manually differentiate SoC families. This is what the
+- * field msr_plat does.
++ * field use_msr_plat does.
+  */
+ struct freq_desc {
+-	u8 msr_plat;	/* 1: use MSR_PLATFORM_INFO, 0: MSR_IA32_PERF_STATUS */
++	bool use_msr_plat;
++	struct muldiv muldiv[MAX_NUM_FREQS];
++	/*
++	 * Some CPU frequencies in the SDM do not map to known PLL freqs, in
++	 * that case the muldiv array is empty and the freqs array is used.
++	 */
+ 	u32 freqs[MAX_NUM_FREQS];
++	u32 mask;
+ };
+ 
+ /*
+@@ -35,31 +63,81 @@ struct freq_desc {
+  * by MSR based on SDM.
+  */
+ static const struct freq_desc freq_desc_pnw = {
+-	0, { 0, 0, 0, 0, 0, 99840, 0, 83200 }
++	.use_msr_plat = false,
++	.freqs = { 0, 0, 0, 0, 0, 99840, 0, 83200 },
++	.mask = 0x07,
+ };
+ 
+ static const struct freq_desc freq_desc_clv = {
+-	0, { 0, 133200, 0, 0, 0, 99840, 0, 83200 }
++	.use_msr_plat = false,
++	.freqs = { 0, 133200, 0, 0, 0, 99840, 0, 83200 },
++	.mask = 0x07,
+ };
+ 
++/*
++ * Bay Trail SDM MSR_FSB_FREQ frequencies simplified PLL model:
++ *  000:   100 *  5 /  6  =  83.3333 MHz
++ *  001:   100 *  1 /  1  = 100.0000 MHz
++ *  010:   100 *  4 /  3  = 133.3333 MHz
++ *  011:   100 *  7 /  6  = 116.6667 MHz
++ *  100:   100 *  4 /  5  =  80.0000 MHz
++ */
+ static const struct freq_desc freq_desc_byt = {
+-	1, { 83300, 100000, 133300, 116700, 80000, 0, 0, 0 }
++	.use_msr_plat = true,
++	.muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 7, 6 },
++		    { 4, 5 } },
++	.mask = 0x07,
+ };
+ 
++/*
++ * Cherry Trail SDM MSR_FSB_FREQ frequencies simplified PLL model:
++ * 0000:   100 *  5 /  6  =  83.3333 MHz
++ * 0001:   100 *  1 /  1  = 100.0000 MHz
++ * 0010:   100 *  4 /  3  = 133.3333 MHz
++ * 0011:   100 *  7 /  6  = 116.6667 MHz
++ * 0100:   100 *  4 /  5  =  80.0000 MHz
++ * 0101:   100 * 14 / 15  =  93.3333 MHz
++ * 0110:   100 *  9 / 10  =  90.0000 MHz
++ * 0111:   100 *  8 /  9  =  88.8889 MHz
++ * 1000:   100 *  7 /  8  =  87.5000 MHz
++ */
+ static const struct freq_desc freq_desc_cht = {
+-	1, { 83300, 100000, 133300, 116700, 80000, 93300, 90000, 88900, 87500 }
++	.use_msr_plat = true,
++	.muldiv = { { 5, 6 }, {  1,  1 }, { 4,  3 }, { 7, 6 },
++		    { 4, 5 }, { 14, 15 }, { 9, 10 }, { 8, 9 },
++		    { 7, 8 } },
++	.mask = 0x0f,
+ };
+ 
++/*
++ * Merriefield SDM MSR_FSB_FREQ frequencies simplified PLL model:
++ * 0001:   100 *  1 /  1  = 100.0000 MHz
++ * 0010:   100 *  4 /  3  = 133.3333 MHz
++ */
+ static const struct freq_desc freq_desc_tng = {
+-	1, { 0, 100000, 133300, 0, 0, 0, 0, 0 }
++	.use_msr_plat = true,
++	.muldiv = { { 0, 0 }, { 1, 1 }, { 4, 3 } },
++	.mask = 0x07,
+ };
+ 
++/*
++ * Moorefield SDM MSR_FSB_FREQ frequencies simplified PLL model:
++ * 0000:   100 *  5 /  6  =  83.3333 MHz
++ * 0001:   100 *  1 /  1  = 100.0000 MHz
++ * 0010:   100 *  4 /  3  = 133.3333 MHz
++ * 0011:   100 *  1 /  1  = 100.0000 MHz
++ */
+ static const struct freq_desc freq_desc_ann = {
+-	1, { 83300, 100000, 133300, 100000, 0, 0, 0, 0 }
++	.use_msr_plat = true,
++	.muldiv = { { 5, 6 }, { 1, 1 }, { 4, 3 }, { 1, 1 } },
++	.mask = 0x0f,
+ };
+ 
++/* 24 MHz crystal? : 24 * 13 / 4 = 78 MHz */
+ static const struct freq_desc freq_desc_lgm = {
+-	1, { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 }
++	.use_msr_plat = true,
++	.freqs = { 78000, 78000, 78000, 78000, 78000, 78000, 78000, 78000 },
++	.mask = 0x0f,
+ };
+ 
+ static const struct x86_cpu_id tsc_msr_cpu_ids[] = {
+@@ -81,17 +159,19 @@ static const struct x86_cpu_id tsc_msr_cpu_ids[] = {
+  */
+ unsigned long cpu_khz_from_msr(void)
+ {
+-	u32 lo, hi, ratio, freq;
++	u32 lo, hi, ratio, freq, tscref;
+ 	const struct freq_desc *freq_desc;
+ 	const struct x86_cpu_id *id;
++	const struct muldiv *md;
+ 	unsigned long res;
++	int index;
+ 
+ 	id = x86_match_cpu(tsc_msr_cpu_ids);
+ 	if (!id)
+ 		return 0;
+ 
+ 	freq_desc = (struct freq_desc *)id->driver_data;
+-	if (freq_desc->msr_plat) {
++	if (freq_desc->use_msr_plat) {
+ 		rdmsr(MSR_PLATFORM_INFO, lo, hi);
+ 		ratio = (lo >> 8) & 0xff;
+ 	} else {
+@@ -101,12 +181,28 @@ unsigned long cpu_khz_from_msr(void)
+ 
+ 	/* Get FSB FREQ ID */
+ 	rdmsr(MSR_FSB_FREQ, lo, hi);
++	index = lo & freq_desc->mask;
++	md = &freq_desc->muldiv[index];
+ 
+-	/* Map CPU reference clock freq ID(0-7) to CPU reference clock freq(KHz) */
+-	freq = freq_desc->freqs[lo & 0x7];
++	/*
++	 * Note this also catches cases where the index points to an unpopulated
++	 * part of muldiv, in that case the else will set freq and res to 0.
++	 */
++	if (md->divider) {
++		tscref = TSC_REFERENCE_KHZ * md->multiplier;
++		freq = DIV_ROUND_CLOSEST(tscref, md->divider);
++		/*
++		 * Multiplying by ratio before the division has better
++		 * accuracy than just calculating freq * ratio.
++		 */
++		res = DIV_ROUND_CLOSEST(tscref * ratio, md->divider);
++	} else {
++		freq = freq_desc->freqs[index];
++		res = freq * ratio;
++	}
+ 
+-	/* TSC frequency = maximum resolved freq * maximum resolved bus ratio */
+-	res = freq * ratio;
++	if (freq == 0)
++		pr_err("Error MSR_FSB_FREQ index %d is unknown\n", index);
+ 
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 	lapic_timer_period = (freq * 1000) / HZ;
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 50d1ebafe0b3..451377533bcb 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1943,6 +1943,10 @@ static struct kvm *svm_vm_alloc(void)
+ 	struct kvm_svm *kvm_svm = __vmalloc(sizeof(struct kvm_svm),
+ 					    GFP_KERNEL_ACCOUNT | __GFP_ZERO,
+ 					    PAGE_KERNEL);
++
++	if (!kvm_svm)
++		return NULL;
++
+ 	return &kvm_svm->kvm;
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 9750e590c89d..eec7b2d93104 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -3604,7 +3604,7 @@ static void nested_vmx_update_pending_dbg(struct kvm_vcpu *vcpu)
+ 			    vcpu->arch.exception.payload);
+ }
+ 
+-static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
++static int vmx_check_nested_events(struct kvm_vcpu *vcpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	unsigned long exit_qual;
+@@ -3680,8 +3680,7 @@ static int vmx_check_nested_events(struct kvm_vcpu *vcpu, bool external_intr)
+ 		return 0;
+ 	}
+ 
+-	if ((kvm_cpu_has_interrupt(vcpu) || external_intr) &&
+-	    nested_exit_on_intr(vcpu)) {
++	if (kvm_cpu_has_interrupt(vcpu) && nested_exit_on_intr(vcpu)) {
+ 		if (block_nested_events)
+ 			return -EBUSY;
+ 		nested_vmx_vmexit(vcpu, EXIT_REASON_EXTERNAL_INTERRUPT, 0, 0);
+@@ -4329,17 +4328,8 @@ void nested_vmx_vmexit(struct kvm_vcpu *vcpu, u32 exit_reason,
+ 	vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
+ 
+ 	if (likely(!vmx->fail)) {
+-		/*
+-		 * TODO: SDM says that with acknowledge interrupt on
+-		 * exit, bit 31 of the VM-exit interrupt information
+-		 * (valid interrupt) is always set to 1 on
+-		 * EXIT_REASON_EXTERNAL_INTERRUPT, so we shouldn't
+-		 * need kvm_cpu_has_interrupt().  See the commit
+-		 * message for details.
+-		 */
+-		if (nested_exit_intr_ack_set(vcpu) &&
+-		    exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT &&
+-		    kvm_cpu_has_interrupt(vcpu)) {
++		if (exit_reason == EXIT_REASON_EXTERNAL_INTERRUPT &&
++		    nested_exit_intr_ack_set(vcpu)) {
+ 			int irq = kvm_cpu_get_interrupt(vcpu);
+ 			WARN_ON(irq < 0);
+ 			vmcs12->vm_exit_intr_info = irq |
+diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
+index 45eaedee2ac0..09b0937d56b1 100644
+--- a/arch/x86/kvm/vmx/ops.h
++++ b/arch/x86/kvm/vmx/ops.h
+@@ -12,7 +12,8 @@
+ 
+ #define __ex(x) __kvm_handle_fault_on_reboot(x)
+ 
+-asmlinkage void vmread_error(unsigned long field, bool fault);
++__attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
++							 bool fault);
+ void vmwrite_error(unsigned long field, unsigned long value);
+ void vmclear_error(struct vmcs *vmcs, u64 phys_addr);
+ void vmptrld_error(struct vmcs *vmcs, u64 phys_addr);
+@@ -70,15 +71,28 @@ static __always_inline unsigned long __vmcs_readl(unsigned long field)
+ 	asm volatile("1: vmread %2, %1\n\t"
+ 		     ".byte 0x3e\n\t" /* branch taken hint */
+ 		     "ja 3f\n\t"
+-		     "mov %2, %%" _ASM_ARG1 "\n\t"
+-		     "xor %%" _ASM_ARG2 ", %%" _ASM_ARG2 "\n\t"
+-		     "2: call vmread_error\n\t"
+-		     "xor %k1, %k1\n\t"
++
++		     /*
++		      * VMREAD failed.  Push '0' for @fault, push the failing
++		      * @field, and bounce through the trampoline to preserve
++		      * volatile registers.
++		      */
++		     "push $0\n\t"
++		     "push %2\n\t"
++		     "2:call vmread_error_trampoline\n\t"
++
++		     /*
++		      * Unwind the stack.  Note, the trampoline zeros out the
++		      * memory for @fault so that the result is '0' on error.
++		      */
++		     "pop %2\n\t"
++		     "pop %1\n\t"
+ 		     "3:\n\t"
+ 
++		     /* VMREAD faulted.  As above, except push '1' for @fault. */
+ 		     ".pushsection .fixup, \"ax\"\n\t"
+-		     "4: mov %2, %%" _ASM_ARG1 "\n\t"
+-		     "mov $1, %%" _ASM_ARG2 "\n\t"
++		     "4: push $1\n\t"
++		     "push %2\n\t"
+ 		     "jmp 2b\n\t"
+ 		     ".popsection\n\t"
+ 		     _ASM_EXTABLE(1b, 4b)
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 81ada2ce99e7..861ae40e7144 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -234,3 +234,61 @@ SYM_FUNC_START(__vmx_vcpu_run)
+ 2:	mov $1, %eax
+ 	jmp 1b
+ SYM_FUNC_END(__vmx_vcpu_run)
++
++/**
++ * vmread_error_trampoline - Trampoline from inline asm to vmread_error()
++ * @field:	VMCS field encoding that failed
++ * @fault:	%true if the VMREAD faulted, %false if it failed
++ *
++ * Save and restore volatile registers across a call to vmread_error().  Note,
++ * all parameters are passed on the stack.
++ */
++SYM_FUNC_START(vmread_error_trampoline)
++	push %_ASM_BP
++	mov  %_ASM_SP, %_ASM_BP
++
++	push %_ASM_AX
++	push %_ASM_CX
++	push %_ASM_DX
++#ifdef CONFIG_X86_64
++	push %rdi
++	push %rsi
++	push %r8
++	push %r9
++	push %r10
++	push %r11
++#endif
++#ifdef CONFIG_X86_64
++	/* Load @field and @fault to arg1 and arg2 respectively. */
++	mov 3*WORD_SIZE(%rbp), %_ASM_ARG2
++	mov 2*WORD_SIZE(%rbp), %_ASM_ARG1
++#else
++	/* Parameters are passed on the stack for 32-bit (see asmlinkage). */
++	push 3*WORD_SIZE(%ebp)
++	push 2*WORD_SIZE(%ebp)
++#endif
++
++	call vmread_error
++
++#ifndef CONFIG_X86_64
++	add $8, %esp
++#endif
++
++	/* Zero out @fault, which will be popped into the result register. */
++	_ASM_MOV $0, 3*WORD_SIZE(%_ASM_BP)
++
++#ifdef CONFIG_X86_64
++	pop %r11
++	pop %r10
++	pop %r9
++	pop %r8
++	pop %rsi
++	pop %rdi
++#endif
++	pop %_ASM_DX
++	pop %_ASM_CX
++	pop %_ASM_AX
++	pop %_ASM_BP
++
++	ret
++SYM_FUNC_END(vmread_error_trampoline)
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 079d9fbf278e..0a7867897507 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -666,43 +666,15 @@ void loaded_vmcs_init(struct loaded_vmcs *loaded_vmcs)
+ }
+ 
+ #ifdef CONFIG_KEXEC_CORE
+-/*
+- * This bitmap is used to indicate whether the vmclear
+- * operation is enabled on all cpus. All disabled by
+- * default.
+- */
+-static cpumask_t crash_vmclear_enabled_bitmap = CPU_MASK_NONE;
+-
+-static inline void crash_enable_local_vmclear(int cpu)
+-{
+-	cpumask_set_cpu(cpu, &crash_vmclear_enabled_bitmap);
+-}
+-
+-static inline void crash_disable_local_vmclear(int cpu)
+-{
+-	cpumask_clear_cpu(cpu, &crash_vmclear_enabled_bitmap);
+-}
+-
+-static inline int crash_local_vmclear_enabled(int cpu)
+-{
+-	return cpumask_test_cpu(cpu, &crash_vmclear_enabled_bitmap);
+-}
+-
+ static void crash_vmclear_local_loaded_vmcss(void)
+ {
+ 	int cpu = raw_smp_processor_id();
+ 	struct loaded_vmcs *v;
+ 
+-	if (!crash_local_vmclear_enabled(cpu))
+-		return;
+-
+ 	list_for_each_entry(v, &per_cpu(loaded_vmcss_on_cpu, cpu),
+ 			    loaded_vmcss_on_cpu_link)
+ 		vmcs_clear(v->vmcs);
+ }
+-#else
+-static inline void crash_enable_local_vmclear(int cpu) { }
+-static inline void crash_disable_local_vmclear(int cpu) { }
+ #endif /* CONFIG_KEXEC_CORE */
+ 
+ static void __loaded_vmcs_clear(void *arg)
+@@ -714,19 +686,24 @@ static void __loaded_vmcs_clear(void *arg)
+ 		return; /* vcpu migration can race with cpu offline */
+ 	if (per_cpu(current_vmcs, cpu) == loaded_vmcs->vmcs)
+ 		per_cpu(current_vmcs, cpu) = NULL;
+-	crash_disable_local_vmclear(cpu);
++
++	vmcs_clear(loaded_vmcs->vmcs);
++	if (loaded_vmcs->shadow_vmcs && loaded_vmcs->launched)
++		vmcs_clear(loaded_vmcs->shadow_vmcs);
++
+ 	list_del(&loaded_vmcs->loaded_vmcss_on_cpu_link);
+ 
+ 	/*
+-	 * we should ensure updating loaded_vmcs->loaded_vmcss_on_cpu_link
+-	 * is before setting loaded_vmcs->vcpu to -1 which is done in
+-	 * loaded_vmcs_init. Otherwise, other cpu can see vcpu = -1 fist
+-	 * then adds the vmcs into percpu list before it is deleted.
++	 * Ensure all writes to loaded_vmcs, including deleting it from its
++	 * current percpu list, complete before setting loaded_vmcs->vcpu to
++	 * -1, otherwise a different cpu can see vcpu == -1 first and add
++	 * loaded_vmcs to its percpu list before it's deleted from this cpu's
++	 * list. Pairs with the smp_rmb() in vmx_vcpu_load_vmcs().
+ 	 */
+ 	smp_wmb();
+ 
+-	loaded_vmcs_init(loaded_vmcs);
+-	crash_enable_local_vmclear(cpu);
++	loaded_vmcs->cpu = -1;
++	loaded_vmcs->launched = 0;
+ }
+ 
+ void loaded_vmcs_clear(struct loaded_vmcs *loaded_vmcs)
+@@ -1345,18 +1322,17 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
+ 	if (!already_loaded) {
+ 		loaded_vmcs_clear(vmx->loaded_vmcs);
+ 		local_irq_disable();
+-		crash_disable_local_vmclear(cpu);
+ 
+ 		/*
+-		 * Read loaded_vmcs->cpu should be before fetching
+-		 * loaded_vmcs->loaded_vmcss_on_cpu_link.
+-		 * See the comments in __loaded_vmcs_clear().
++		 * Ensure loaded_vmcs->cpu is read before adding loaded_vmcs to
++		 * this cpu's percpu list, otherwise it may not yet be deleted
++		 * from its previous cpu's percpu list.  Pairs with the
++		 * smb_wmb() in __loaded_vmcs_clear().
+ 		 */
+ 		smp_rmb();
+ 
+ 		list_add(&vmx->loaded_vmcs->loaded_vmcss_on_cpu_link,
+ 			 &per_cpu(loaded_vmcss_on_cpu, cpu));
+-		crash_enable_local_vmclear(cpu);
+ 		local_irq_enable();
+ 	}
+ 
+@@ -2288,21 +2264,6 @@ static int hardware_enable(void)
+ 	    !hv_get_vp_assist_page(cpu))
+ 		return -EFAULT;
+ 
+-	INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
+-	INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
+-	spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
+-
+-	/*
+-	 * Now we can enable the vmclear operation in kdump
+-	 * since the loaded_vmcss_on_cpu list on this cpu
+-	 * has been initialized.
+-	 *
+-	 * Though the cpu is not in VMX operation now, there
+-	 * is no problem to enable the vmclear operation
+-	 * for the loaded_vmcss_on_cpu list is empty!
+-	 */
+-	crash_enable_local_vmclear(cpu);
+-
+ 	kvm_cpu_vmxon(phys_addr);
+ 	if (enable_ept)
+ 		ept_sync_global();
+@@ -4507,8 +4468,13 @@ static int vmx_nmi_allowed(struct kvm_vcpu *vcpu)
+ 
+ static int vmx_interrupt_allowed(struct kvm_vcpu *vcpu)
+ {
+-	return (!to_vmx(vcpu)->nested.nested_run_pending &&
+-		vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
++	if (to_vmx(vcpu)->nested.nested_run_pending)
++		return false;
++
++	if (is_guest_mode(vcpu) && nested_exit_on_intr(vcpu))
++		return true;
++
++	return (vmcs_readl(GUEST_RFLAGS) & X86_EFLAGS_IF) &&
+ 		!(vmcs_read32(GUEST_INTERRUPTIBILITY_INFO) &
+ 			(GUEST_INTR_STATE_STI | GUEST_INTR_STATE_MOV_SS));
+ }
+@@ -6701,6 +6667,10 @@ static struct kvm *vmx_vm_alloc(void)
+ 	struct kvm_vmx *kvm_vmx = __vmalloc(sizeof(struct kvm_vmx),
+ 					    GFP_KERNEL_ACCOUNT | __GFP_ZERO,
+ 					    PAGE_KERNEL);
++
++	if (!kvm_vmx)
++		return NULL;
++
+ 	return &kvm_vmx->kvm;
+ }
+ 
+@@ -8051,7 +8021,7 @@ module_exit(vmx_exit);
+ 
+ static int __init vmx_init(void)
+ {
+-	int r;
++	int r, cpu;
+ 
+ #if IS_ENABLED(CONFIG_HYPERV)
+ 	/*
+@@ -8105,6 +8075,12 @@ static int __init vmx_init(void)
+ 		return r;
+ 	}
+ 
++	for_each_possible_cpu(cpu) {
++		INIT_LIST_HEAD(&per_cpu(loaded_vmcss_on_cpu, cpu));
++		INIT_LIST_HEAD(&per_cpu(blocked_vcpu_on_cpu, cpu));
++		spin_lock_init(&per_cpu(blocked_vcpu_on_cpu_lock, cpu));
++	}
++
+ #ifdef CONFIG_KEXEC_CORE
+ 	rcu_assign_pointer(crash_vmclear_loaded_vmcss,
+ 			   crash_vmclear_local_loaded_vmcss);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index cf95c36cb4f4..17650bda4331 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7635,7 +7635,7 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
+ 	kvm_x86_ops->update_cr8_intercept(vcpu, tpr, max_irr);
+ }
+ 
+-static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
++static int inject_pending_event(struct kvm_vcpu *vcpu)
+ {
+ 	int r;
+ 
+@@ -7671,7 +7671,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
+ 	 * from L2 to L1.
+ 	 */
+ 	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
+-		r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
++		r = kvm_x86_ops->check_nested_events(vcpu);
+ 		if (r != 0)
+ 			return r;
+ 	}
+@@ -7733,7 +7733,7 @@ static int inject_pending_event(struct kvm_vcpu *vcpu, bool req_int_win)
+ 		 * KVM_REQ_EVENT only on certain events and not unconditionally?
+ 		 */
+ 		if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events) {
+-			r = kvm_x86_ops->check_nested_events(vcpu, req_int_win);
++			r = kvm_x86_ops->check_nested_events(vcpu);
+ 			if (r != 0)
+ 				return r;
+ 		}
+@@ -8266,7 +8266,7 @@ static int vcpu_enter_guest(struct kvm_vcpu *vcpu)
+ 			goto out;
+ 		}
+ 
+-		if (inject_pending_event(vcpu, req_int_win) != 0)
++		if (inject_pending_event(vcpu) != 0)
+ 			req_immediate_exit = true;
+ 		else {
+ 			/* Enable SMI/NMI/IRQ window open exits if needed.
+@@ -8496,7 +8496,7 @@ static inline int vcpu_block(struct kvm *kvm, struct kvm_vcpu *vcpu)
+ static inline bool kvm_vcpu_running(struct kvm_vcpu *vcpu)
+ {
+ 	if (is_guest_mode(vcpu) && kvm_x86_ops->check_nested_events)
+-		kvm_x86_ops->check_nested_events(vcpu, false);
++		kvm_x86_ops->check_nested_events(vcpu);
+ 
+ 	return (vcpu->arch.mp_state == KVM_MP_STATE_RUNNABLE &&
+ 		!vcpu->arch.apf.halted);
+@@ -9873,6 +9873,13 @@ int kvm_arch_create_memslot(struct kvm *kvm, struct kvm_memory_slot *slot,
+ {
+ 	int i;
+ 
++	/*
++	 * Clear out the previous array pointers for the KVM_MR_MOVE case.  The
++	 * old arrays will be freed by __kvm_set_memory_region() if installing
++	 * the new memslot is successful.
++	 */
++	memset(&slot->arch, 0, sizeof(slot->arch));
++
+ 	for (i = 0; i < KVM_NR_PAGE_SIZES; ++i) {
+ 		struct kvm_lpage_info *linfo;
+ 		unsigned long ugfn;
+@@ -9954,6 +9961,10 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
+ 				const struct kvm_userspace_memory_region *mem,
+ 				enum kvm_mr_change change)
+ {
++	if (change == KVM_MR_MOVE)
++		return kvm_arch_create_memslot(kvm, memslot,
++					       mem->memory_size >> PAGE_SHIFT);
++
+ 	return 0;
+ }
+ 
+diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
+index ae923ee8e2b4..bfe4b0fb6f71 100644
+--- a/arch/x86/platform/efi/efi.c
++++ b/arch/x86/platform/efi/efi.c
+@@ -85,6 +85,8 @@ static const unsigned long * const efi_tables[] = {
+ #ifdef CONFIG_EFI_RCI2_TABLE
+ 	&rci2_table_phys,
+ #endif
++	&efi.tpm_log,
++	&efi.tpm_final_log,
+ };
+ 
+ u64 efi_setup;		/* efi setup_data physical address */
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index d19a2edd63cb..a47294063882 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -641,7 +641,7 @@ efi_thunk_set_variable(efi_char16_t *name, efi_guid_t *vendor,
+ 	phys_vendor = virt_to_phys_or_null(vnd);
+ 	phys_data = virt_to_phys_or_null_size(data, data_size);
+ 
+-	if (!phys_name || !phys_data)
++	if (!phys_name || (data && !phys_data))
+ 		status = EFI_INVALID_PARAMETER;
+ 	else
+ 		status = efi_thunk(set_variable, phys_name, phys_vendor,
+@@ -672,7 +672,7 @@ efi_thunk_set_variable_nonblocking(efi_char16_t *name, efi_guid_t *vendor,
+ 	phys_vendor = virt_to_phys_or_null(vnd);
+ 	phys_data = virt_to_phys_or_null_size(data, data_size);
+ 
+-	if (!phys_name || !phys_data)
++	if (!phys_name || (data && !phys_data))
+ 		status = EFI_INVALID_PARAMETER;
+ 	else
+ 		status = efi_thunk(set_variable, phys_name, phys_vendor,
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index f0ff6654af28..9d963ed518d1 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -642,6 +642,12 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ {
+ 	struct bfq_entity *entity = &bfqq->entity;
+ 
++	/*
++	 * Get extra reference to prevent bfqq from being freed in
++	 * next possible expire or deactivate.
++	 */
++	bfqq->ref++;
++
+ 	/* If bfqq is empty, then bfq_bfqq_expire also invokes
+ 	 * bfq_del_bfqq_busy, thereby removing bfqq and its entity
+ 	 * from data structures related to current group. Otherwise we
+@@ -652,12 +658,6 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		bfq_bfqq_expire(bfqd, bfqd->in_service_queue,
+ 				false, BFQQE_PREEMPTED);
+ 
+-	/*
+-	 * get extra reference to prevent bfqq from being freed in
+-	 * next possible deactivate
+-	 */
+-	bfqq->ref++;
+-
+ 	if (bfq_bfqq_busy(bfqq))
+ 		bfq_deactivate_bfqq(bfqd, bfqq, false, false);
+ 	else if (entity->on_st_or_in_serv)
+@@ -677,7 +677,7 @@ void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 
+ 	if (!bfqd->in_service_queue && !bfqd->rq_in_driver)
+ 		bfq_schedule_dispatch(bfqd);
+-	/* release extra ref taken above */
++	/* release extra ref taken above, bfqq may happen to be freed now */
+ 	bfq_put_queue(bfqq);
+ }
+ 
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 8c436abfaf14..4a44c7f19435 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -6215,20 +6215,28 @@ static struct bfq_queue *bfq_init_rq(struct request *rq)
+ 	return bfqq;
+ }
+ 
+-static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq)
++static void
++bfq_idle_slice_timer_body(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+-	struct bfq_data *bfqd = bfqq->bfqd;
+ 	enum bfqq_expiration reason;
+ 	unsigned long flags;
+ 
+ 	spin_lock_irqsave(&bfqd->lock, flags);
+-	bfq_clear_bfqq_wait_request(bfqq);
+ 
++	/*
++	 * Since bfqq may be involved in a race, first check whether
++	 * bfqq is in service before acting on it. If the racing bfqq
++	 * is not in service, it has already been expired through
++	 * __bfq_bfqq_expire() and its wait_request flag has been
++	 * cleared in __bfq_bfqd_reset_in_service().
++	 */
+ 	if (bfqq != bfqd->in_service_queue) {
+ 		spin_unlock_irqrestore(&bfqd->lock, flags);
+ 		return;
+ 	}
+ 
++	bfq_clear_bfqq_wait_request(bfqq);
++
+ 	if (bfq_bfqq_budget_timeout(bfqq))
+ 		/*
+ 		 * Also here the queue can be safely expired
+@@ -6273,7 +6281,7 @@ static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer)
+ 	 * early.
+ 	 */
+ 	if (bfqq)
+-		bfq_idle_slice_timer_body(bfqq);
++		bfq_idle_slice_timer_body(bfqd, bfqq);
+ 
+ 	return HRTIMER_NORESTART;
+ }
+diff --git a/block/blk-ioc.c b/block/blk-ioc.c
+index 5ed59ac6ae58..9df50fb507ca 100644
+--- a/block/blk-ioc.c
++++ b/block/blk-ioc.c
+@@ -84,6 +84,7 @@ static void ioc_destroy_icq(struct io_cq *icq)
+ 	 * making it impossible to determine icq_cache.  Record it in @icq.
+ 	 */
+ 	icq->__rcu_icq_cache = et->icq_cache;
++	icq->flags |= ICQ_DESTROYED;
+ 	call_rcu(&icq->__rcu_head, icq_free_icq_rcu);
+ }
+ 
+@@ -212,15 +213,21 @@ static void __ioc_clear_queue(struct list_head *icq_list)
+ {
+ 	unsigned long flags;
+ 
++	rcu_read_lock();
+ 	while (!list_empty(icq_list)) {
+ 		struct io_cq *icq = list_entry(icq_list->next,
+ 						struct io_cq, q_node);
+ 		struct io_context *ioc = icq->ioc;
+ 
+ 		spin_lock_irqsave(&ioc->lock, flags);
++		if (icq->flags & ICQ_DESTROYED) {
++			spin_unlock_irqrestore(&ioc->lock, flags);
++			continue;
++		}
+ 		ioc_destroy_icq(icq);
+ 		spin_unlock_irqrestore(&ioc->lock, flags);
+ 	}
++	rcu_read_unlock();
+ }
+ 
+ /**
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index d4bd9b961726..37ff8dfb8ab9 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -2824,7 +2824,6 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
+ 			memcpy(new_hctxs, hctxs, q->nr_hw_queues *
+ 			       sizeof(*hctxs));
+ 		q->queue_hw_ctx = new_hctxs;
+-		q->nr_hw_queues = set->nr_hw_queues;
+ 		kfree(hctxs);
+ 		hctxs = new_hctxs;
+ 	}
+diff --git a/block/blk-settings.c b/block/blk-settings.c
+index c8eda2e7b91e..be1dca0103a4 100644
+--- a/block/blk-settings.c
++++ b/block/blk-settings.c
+@@ -664,6 +664,9 @@ void disk_stack_limits(struct gendisk *disk, struct block_device *bdev,
+ 		printk(KERN_NOTICE "%s: Warning: Device %s is misaligned\n",
+ 		       top, bottom);
+ 	}
++
++	t->backing_dev_info->io_pages =
++		t->limits.max_sectors >> (PAGE_SHIFT - 9);
+ }
+ EXPORT_SYMBOL(disk_stack_limits);
+ 
+diff --git a/block/blk-zoned.c b/block/blk-zoned.c
+index 05741c6f618b..6b442ae96499 100644
+--- a/block/blk-zoned.c
++++ b/block/blk-zoned.c
+@@ -173,7 +173,7 @@ int blkdev_zone_mgmt(struct block_device *bdev, enum req_opf op,
+ 	if (!op_is_zone_mgmt(op))
+ 		return -EOPNOTSUPP;
+ 
+-	if (!nr_sectors || end_sector > capacity)
++	if (end_sector <= sector || end_sector > capacity)
+ 		/* Out of range */
+ 		return -EINVAL;
+ 
+diff --git a/crypto/rng.c b/crypto/rng.c
+index 1e21231f71c9..1490d210f1a1 100644
+--- a/crypto/rng.c
++++ b/crypto/rng.c
+@@ -37,12 +37,16 @@ int crypto_rng_reset(struct crypto_rng *tfm, const u8 *seed, unsigned int slen)
+ 	crypto_stats_get(alg);
+ 	if (!seed && slen) {
+ 		buf = kmalloc(slen, GFP_KERNEL);
+-		if (!buf)
++		if (!buf) {
++			crypto_alg_put(alg);
+ 			return -ENOMEM;
++		}
+ 
+ 		err = get_random_bytes_wait(buf, slen);
+-		if (err)
++		if (err) {
++			crypto_alg_put(alg);
+ 			goto out;
++		}
+ 		seed = buf;
+ 	}
+ 
+diff --git a/drivers/acpi/acpica/achware.h b/drivers/acpi/acpica/achware.h
+index 6ad0517553d5..ebf6453d0e21 100644
+--- a/drivers/acpi/acpica/achware.h
++++ b/drivers/acpi/acpica/achware.h
+@@ -101,7 +101,7 @@ acpi_status acpi_hw_enable_all_runtime_gpes(void);
+ 
+ acpi_status acpi_hw_enable_all_wakeup_gpes(void);
+ 
+-u8 acpi_hw_check_all_gpes(void);
++u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number);
+ 
+ acpi_status
+ acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+diff --git a/drivers/acpi/acpica/evxfgpe.c b/drivers/acpi/acpica/evxfgpe.c
+index f2de66bfd8a7..3be60673e461 100644
+--- a/drivers/acpi/acpica/evxfgpe.c
++++ b/drivers/acpi/acpica/evxfgpe.c
+@@ -799,17 +799,19 @@ ACPI_EXPORT_SYMBOL(acpi_enable_all_wakeup_gpes)
+  *
+  * FUNCTION:    acpi_any_gpe_status_set
+  *
+- * PARAMETERS:  None
++ * PARAMETERS:  gpe_skip_number      - Number of the GPE to skip
+  *
+  * RETURN:      Whether or not the status bit is set for any GPE
+  *
+- * DESCRIPTION: Check the status bits of all enabled GPEs and return TRUE if any
+- *              of them is set or FALSE otherwise.
++ * DESCRIPTION: Check the status bits of all enabled GPEs, except for the one
++ *              represented by the "skip" argument, and return TRUE if any of
++ *              them is set or FALSE otherwise.
+  *
+  ******************************************************************************/
+-u32 acpi_any_gpe_status_set(void)
++u32 acpi_any_gpe_status_set(u32 gpe_skip_number)
+ {
+ 	acpi_status status;
++	acpi_handle gpe_device;
+ 	u8 ret;
+ 
+ 	ACPI_FUNCTION_TRACE(acpi_any_gpe_status_set);
+@@ -819,7 +821,12 @@ u32 acpi_any_gpe_status_set(void)
+ 		return (FALSE);
+ 	}
+ 
+-	ret = acpi_hw_check_all_gpes();
++	status = acpi_get_gpe_device(gpe_skip_number, &gpe_device);
++	if (ACPI_FAILURE(status)) {
++		gpe_device = NULL;
++	}
++
++	ret = acpi_hw_check_all_gpes(gpe_device, gpe_skip_number);
+ 	(void)acpi_ut_release_mutex(ACPI_MTX_EVENTS);
+ 
+ 	return (ret);
+diff --git a/drivers/acpi/acpica/hwgpe.c b/drivers/acpi/acpica/hwgpe.c
+index f4c285c2f595..49c46d4dd070 100644
+--- a/drivers/acpi/acpica/hwgpe.c
++++ b/drivers/acpi/acpica/hwgpe.c
+@@ -444,12 +444,19 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+ 	return (AE_OK);
+ }
+ 
++struct acpi_gpe_block_status_context {
++	struct acpi_gpe_register_info *gpe_skip_register_info;
++	u8 gpe_skip_mask;
++	u8 retval;
++};
++
+ /******************************************************************************
+  *
+  * FUNCTION:    acpi_hw_get_gpe_block_status
+  *
+  * PARAMETERS:  gpe_xrupt_info      - GPE Interrupt info
+  *              gpe_block           - Gpe Block info
++ *              context             - GPE list walk context data
+  *
+  * RETURN:      Success
+  *
+@@ -460,12 +467,13 @@ acpi_hw_enable_wakeup_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+ static acpi_status
+ acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+ 			     struct acpi_gpe_block_info *gpe_block,
+-			     void *ret_ptr)
++			     void *context)
+ {
++	struct acpi_gpe_block_status_context *c = context;
+ 	struct acpi_gpe_register_info *gpe_register_info;
+ 	u64 in_enable, in_status;
+ 	acpi_status status;
+-	u8 *ret = ret_ptr;
++	u8 ret_mask;
+ 	u32 i;
+ 
+ 	/* Examine each GPE Register within the block */
+@@ -485,7 +493,11 @@ acpi_hw_get_gpe_block_status(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
+ 			continue;
+ 		}
+ 
+-		*ret |= in_enable & in_status;
++		ret_mask = in_enable & in_status;
++		if (ret_mask && c->gpe_skip_register_info == gpe_register_info) {
++			ret_mask &= ~c->gpe_skip_mask;
++		}
++		c->retval |= ret_mask;
+ 	}
+ 
+ 	return (AE_OK);
+@@ -561,24 +573,41 @@ acpi_status acpi_hw_enable_all_wakeup_gpes(void)
+  *
+  * FUNCTION:    acpi_hw_check_all_gpes
+  *
+- * PARAMETERS:  None
++ * PARAMETERS:  gpe_skip_device      - GPE device of the GPE to skip
++ *              gpe_skip_number      - Number of the GPE to skip
+  *
+  * RETURN:      Combined status of all GPEs
+  *
+- * DESCRIPTION: Check all enabled GPEs in all GPE blocks and return TRUE if the
++ * DESCRIPTION: Check all enabled GPEs in all GPE blocks, except for the one
++ *              represented by the "skip" arguments, and return TRUE if the
+  *              status bit is set for at least one of them of FALSE otherwise.
+  *
+  ******************************************************************************/
+ 
+-u8 acpi_hw_check_all_gpes(void)
++u8 acpi_hw_check_all_gpes(acpi_handle gpe_skip_device, u32 gpe_skip_number)
+ {
+-	u8 ret = 0;
++	struct acpi_gpe_block_status_context context = {
++		.gpe_skip_register_info = NULL,
++		.retval = 0,
++	};
++	struct acpi_gpe_event_info *gpe_event_info;
++	acpi_cpu_flags flags;
+ 
+ 	ACPI_FUNCTION_TRACE(acpi_hw_check_all_gpes);
+ 
+-	(void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &ret);
++	flags = acpi_os_acquire_lock(acpi_gbl_gpe_lock);
++
++	gpe_event_info = acpi_ev_get_gpe_event_info(gpe_skip_device,
++						    gpe_skip_number);
++	if (gpe_event_info) {
++		context.gpe_skip_register_info = gpe_event_info->register_info;
++		context.gpe_skip_mask = acpi_hw_get_gpe_register_bit(gpe_event_info);
++	}
++
++	acpi_os_release_lock(acpi_gbl_gpe_lock, flags);
+ 
+-	return (ret != 0);
++	(void)acpi_ev_walk_gpe_list(acpi_hw_get_gpe_block_status, &context);
++	return (context.retval != 0);
+ }
+ 
+ #endif				/* !ACPI_REDUCED_HARDWARE */
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index d1f1cf5d4bf0..29b8fa618a02 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1584,14 +1584,19 @@ static int acpi_ec_setup(struct acpi_ec *ec, struct acpi_device *device,
+ 		return ret;
+ 
+ 	/* First EC capable of handling transactions */
+-	if (!first_ec) {
++	if (!first_ec)
+ 		first_ec = ec;
+-		acpi_handle_info(first_ec->handle, "Used as first EC\n");
++
++	pr_info("EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n", ec->command_addr,
++		ec->data_addr);
++
++	if (test_bit(EC_FLAGS_EVENT_HANDLER_INSTALLED, &ec->flags)) {
++		if (ec->gpe >= 0)
++			pr_info("GPE=0x%x\n", ec->gpe);
++		else
++			pr_info("IRQ=%d\n", ec->irq);
+ 	}
+ 
+-	acpi_handle_info(ec->handle,
+-			 "GPE=0x%x, IRQ=%d, EC_CMD/EC_SC=0x%lx, EC_DATA=0x%lx\n",
+-			 ec->gpe, ec->irq, ec->command_addr, ec->data_addr);
+ 	return ret;
+ }
+ 
+@@ -1641,7 +1646,6 @@ static int acpi_ec_add(struct acpi_device *device)
+ 
+ 		if (boot_ec && ec->command_addr == boot_ec->command_addr &&
+ 		    ec->data_addr == boot_ec->data_addr) {
+-			boot_ec_is_ecdt = false;
+ 			/*
+ 			 * Trust PNP0C09 namespace location rather than
+ 			 * ECDT ID. But trust ECDT GPE rather than _GPE
+@@ -1661,9 +1665,12 @@ static int acpi_ec_add(struct acpi_device *device)
+ 
+ 	if (ec == boot_ec)
+ 		acpi_handle_info(boot_ec->handle,
+-				 "Boot %s EC used to handle transactions and events\n",
++				 "Boot %s EC initialization complete\n",
+ 				 boot_ec_is_ecdt ? "ECDT" : "DSDT");
+ 
++	acpi_handle_info(ec->handle,
++			 "EC: Used to handle transactions and events\n");
++
+ 	device->driver_data = ec;
+ 
+ 	ret = !!request_region(ec->data_addr, 1, "EC data");
+@@ -2037,6 +2044,11 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
+ 		acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
+ }
+ 
++bool acpi_ec_other_gpes_active(void)
++{
++	return acpi_any_gpe_status_set(first_ec ? first_ec->gpe : U32_MAX);
++}
++
+ bool acpi_ec_dispatch_gpe(void)
+ {
+ 	u32 ret;
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index 3616daec650b..d44c591c4ee4 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -202,6 +202,7 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
+ 
+ #ifdef CONFIG_PM_SLEEP
+ void acpi_ec_flush_work(void);
++bool acpi_ec_other_gpes_active(void);
+ bool acpi_ec_dispatch_gpe(void);
+ #endif
+ 
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index ce49cbfa941b..f4dbdfafafe3 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -1017,18 +1017,19 @@ static bool acpi_s2idle_wake(void)
+ 			return true;
+ 
+ 		/*
+-		 * If there are no EC events to process and at least one of the
+-		 * other enabled GPEs is active, the wakeup is regarded as a
+-		 * genuine one.
+-		 *
+-		 * Note that the checks below must be carried out in this order
+-		 * to avoid returning prematurely due to a change of the EC GPE
+-		 * status bit from unset to set between the checks with the
+-		 * status bits of all the other GPEs unset.
++		 * If the status bit is set for any enabled GPE other than the
++		 * EC one, the wakeup is regarded as a genuine one.
+ 		 */
+-		if (acpi_any_gpe_status_set() && !acpi_ec_dispatch_gpe())
++		if (acpi_ec_other_gpes_active())
+ 			return true;
+ 
++		/*
++		 * If the EC GPE status bit has not been set, the wakeup is
++		 * regarded as a spurious one.
++		 */
++		if (!acpi_ec_dispatch_gpe())
++			return false;
++
+ 		/*
+ 		 * Cancel the wakeup and process all pending events in case
+ 		 * there are any wakeup ones in there.
+diff --git a/drivers/ata/libata-pmp.c b/drivers/ata/libata-pmp.c
+index 3ff14071617c..79f2aeeb482a 100644
+--- a/drivers/ata/libata-pmp.c
++++ b/drivers/ata/libata-pmp.c
+@@ -763,6 +763,7 @@ static int sata_pmp_eh_recover_pmp(struct ata_port *ap,
+ 
+ 	if (dev->flags & ATA_DFLAG_DETACH) {
+ 		detach = 1;
++		rc = -ENODEV;
+ 		goto fail;
+ 	}
+ 
+diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
+index eb2eb599e602..061eebf85e6d 100644
+--- a/drivers/ata/libata-scsi.c
++++ b/drivers/ata/libata-scsi.c
+@@ -4562,22 +4562,19 @@ int ata_scsi_add_hosts(struct ata_host *host, struct scsi_host_template *sht)
+ 		 */
+ 		shost->max_host_blocked = 1;
+ 
+-		rc = scsi_add_host_with_dma(ap->scsi_host,
+-						&ap->tdev, ap->host->dev);
++		rc = scsi_add_host_with_dma(shost, &ap->tdev, ap->host->dev);
+ 		if (rc)
+-			goto err_add;
++			goto err_alloc;
+ 	}
+ 
+ 	return 0;
+ 
+- err_add:
+-	scsi_host_put(host->ports[i]->scsi_host);
+  err_alloc:
+ 	while (--i >= 0) {
+ 		struct Scsi_Host *shost = host->ports[i]->scsi_host;
+ 
++		/* scsi_host_put() is in ata_devres_release() */
+ 		scsi_remove_host(shost);
+-		scsi_host_put(shost);
+ 	}
+ 	return rc;
+ }
+diff --git a/drivers/base/firmware_loader/fallback.c b/drivers/base/firmware_loader/fallback.c
+index 8704e1bae175..1e9c96e3ed63 100644
+--- a/drivers/base/firmware_loader/fallback.c
++++ b/drivers/base/firmware_loader/fallback.c
+@@ -525,7 +525,7 @@ static int fw_load_sysfs_fallback(struct fw_sysfs *fw_sysfs,
+ 	}
+ 
+ 	retval = fw_sysfs_wait_timeout(fw_priv, timeout);
+-	if (retval < 0) {
++	if (retval < 0 && retval != -ENOENT) {
+ 		mutex_lock(&fw_lock);
+ 		fw_load_abort(fw_sysfs);
+ 		mutex_unlock(&fw_lock);
+diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
+index 959d6d5eb000..0a01df608849 100644
+--- a/drivers/base/power/domain.c
++++ b/drivers/base/power/domain.c
+@@ -2653,7 +2653,7 @@ static int genpd_iterate_idle_states(struct device_node *dn,
+ 
+ 	ret = of_count_phandle_with_args(dn, "domain-idle-states", NULL);
+ 	if (ret <= 0)
+-		return ret;
++		return ret == -ENOENT ? 0 : ret;
+ 
+ 	/* Loop over the phandles until all the requested entry is found */
+ 	of_for_each_phandle(&it, ret, dn, "domain-idle-states", NULL, 0) {
+diff --git a/drivers/base/power/wakeup.c b/drivers/base/power/wakeup.c
+index 27f3e60608e5..6dffcb71b86c 100644
+--- a/drivers/base/power/wakeup.c
++++ b/drivers/base/power/wakeup.c
+@@ -241,7 +241,9 @@ void wakeup_source_unregister(struct wakeup_source *ws)
+ {
+ 	if (ws) {
+ 		wakeup_source_remove(ws);
+-		wakeup_source_sysfs_remove(ws);
++		if (ws->dev)
++			wakeup_source_sysfs_remove(ws);
++
+ 		wakeup_source_destroy(ws);
+ 	}
+ }
+diff --git a/drivers/block/null_blk_main.c b/drivers/block/null_blk_main.c
+index 133060431dbd..d6a8d66e9803 100644
+--- a/drivers/block/null_blk_main.c
++++ b/drivers/block/null_blk_main.c
+@@ -276,7 +276,7 @@ nullb_device_##NAME##_store(struct config_item *item, const char *page,	\
+ {									\
+ 	int (*apply_fn)(struct nullb_device *dev, TYPE new_value) = APPLY;\
+ 	struct nullb_device *dev = to_nullb_device(item);		\
+-	TYPE uninitialized_var(new_value);				\
++	TYPE new_value = 0;						\
+ 	int ret;							\
+ 									\
+ 	ret = nullb_device_##TYPE##_attr_store(&new_value, page, count);\
+@@ -605,6 +605,7 @@ static struct nullb_cmd *__alloc_cmd(struct nullb_queue *nq)
+ 	if (tag != -1U) {
+ 		cmd = &nq->cmds[tag];
+ 		cmd->tag = tag;
++		cmd->error = BLK_STS_OK;
+ 		cmd->nq = nq;
+ 		if (nq->dev->irqmode == NULL_IRQ_TIMER) {
+ 			hrtimer_init(&cmd->timer, CLOCK_MONOTONIC,
+@@ -1385,6 +1386,7 @@ static blk_status_t null_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		cmd->timer.function = null_cmd_timer_expired;
+ 	}
+ 	cmd->rq = bd->rq;
++	cmd->error = BLK_STS_OK;
+ 	cmd->nq = nq;
+ 
+ 	blk_mq_start_request(bd->rq);
+@@ -1432,7 +1434,12 @@ static void cleanup_queues(struct nullb *nullb)
+ 
+ static void null_del_dev(struct nullb *nullb)
+ {
+-	struct nullb_device *dev = nullb->dev;
++	struct nullb_device *dev;
++
++	if (!nullb)
++		return;
++
++	dev = nullb->dev;
+ 
+ 	ida_simple_remove(&nullb_indexes, nullb->index);
+ 
+@@ -1788,6 +1795,7 @@ out_cleanup_queues:
+ 	cleanup_queues(nullb);
+ out_free_nullb:
+ 	kfree(nullb);
++	dev->nullb = NULL;
+ out:
+ 	return rv;
+ }
+diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
+index 9df516a56bb2..b32877e0b384 100644
+--- a/drivers/block/xen-blkfront.c
++++ b/drivers/block/xen-blkfront.c
+@@ -47,6 +47,7 @@
+ #include <linux/bitmap.h>
+ #include <linux/list.h>
+ #include <linux/workqueue.h>
++#include <linux/sched/mm.h>
+ 
+ #include <xen/xen.h>
+ #include <xen/xenbus.h>
+@@ -2189,10 +2190,12 @@ static void blkfront_setup_discard(struct blkfront_info *info)
+ 
+ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ {
+-	unsigned int psegs, grants;
++	unsigned int psegs, grants, memflags;
+ 	int err, i;
+ 	struct blkfront_info *info = rinfo->dev_info;
+ 
++	memflags = memalloc_noio_save();
++
+ 	if (info->max_indirect_segments == 0) {
+ 		if (!HAS_EXTRA_REQ)
+ 			grants = BLKIF_MAX_SEGMENTS_PER_REQUEST;
+@@ -2224,7 +2227,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ 
+ 		BUG_ON(!list_empty(&rinfo->indirect_pages));
+ 		for (i = 0; i < num; i++) {
+-			struct page *indirect_page = alloc_page(GFP_NOIO);
++			struct page *indirect_page = alloc_page(GFP_KERNEL);
+ 			if (!indirect_page)
+ 				goto out_of_memory;
+ 			list_add(&indirect_page->lru, &rinfo->indirect_pages);
+@@ -2235,15 +2238,15 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ 		rinfo->shadow[i].grants_used =
+ 			kvcalloc(grants,
+ 				 sizeof(rinfo->shadow[i].grants_used[0]),
+-				 GFP_NOIO);
++				 GFP_KERNEL);
+ 		rinfo->shadow[i].sg = kvcalloc(psegs,
+ 					       sizeof(rinfo->shadow[i].sg[0]),
+-					       GFP_NOIO);
++					       GFP_KERNEL);
+ 		if (info->max_indirect_segments)
+ 			rinfo->shadow[i].indirect_grants =
+ 				kvcalloc(INDIRECT_GREFS(grants),
+ 					 sizeof(rinfo->shadow[i].indirect_grants[0]),
+-					 GFP_NOIO);
++					 GFP_KERNEL);
+ 		if ((rinfo->shadow[i].grants_used == NULL) ||
+ 			(rinfo->shadow[i].sg == NULL) ||
+ 		     (info->max_indirect_segments &&
+@@ -2252,6 +2255,7 @@ static int blkfront_setup_indirect(struct blkfront_ring_info *rinfo)
+ 		sg_init_table(rinfo->shadow[i].sg, psegs);
+ 	}
+ 
++	memalloc_noio_restore(memflags);
+ 
+ 	return 0;
+ 
+@@ -2271,6 +2275,9 @@ out_of_memory:
+ 			__free_page(indirect_page);
+ 		}
+ 	}
++
++	memalloc_noio_restore(memflags);
++
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
+index cad9563f8f48..4c51f794d04c 100644
+--- a/drivers/char/ipmi/ipmi_msghandler.c
++++ b/drivers/char/ipmi/ipmi_msghandler.c
+@@ -3188,8 +3188,8 @@ static void __get_guid(struct ipmi_smi *intf)
+ 	if (rv)
+ 		/* Send failed, no GUID available. */
+ 		bmc->dyn_guid_set = 0;
+-
+-	wait_event(intf->waitq, bmc->dyn_guid_set != 2);
++	else
++		wait_event(intf->waitq, bmc->dyn_guid_set != 2);
+ 
+ 	/* dyn_guid_set makes the guid data available. */
+ 	smp_rmb();
+diff --git a/drivers/char/tpm/eventlog/common.c b/drivers/char/tpm/eventlog/common.c
+index 7a0fca659b6a..7460f230bae4 100644
+--- a/drivers/char/tpm/eventlog/common.c
++++ b/drivers/char/tpm/eventlog/common.c
+@@ -99,11 +99,8 @@ static int tpm_read_log(struct tpm_chip *chip)
+  *
+  * If an event log is found then the securityfs files are setup to
+  * export it to userspace, otherwise nothing is done.
+- *
+- * Returns -ENODEV if the firmware has no event log or securityfs is not
+- * supported.
+  */
+-int tpm_bios_log_setup(struct tpm_chip *chip)
++void tpm_bios_log_setup(struct tpm_chip *chip)
+ {
+ 	const char *name = dev_name(&chip->dev);
+ 	unsigned int cnt;
+@@ -112,7 +109,7 @@ int tpm_bios_log_setup(struct tpm_chip *chip)
+ 
+ 	rc = tpm_read_log(chip);
+ 	if (rc < 0)
+-		return rc;
++		return;
+ 	log_version = rc;
+ 
+ 	cnt = 0;
+@@ -158,13 +155,12 @@ int tpm_bios_log_setup(struct tpm_chip *chip)
+ 		cnt++;
+ 	}
+ 
+-	return 0;
++	return;
+ 
+ err:
+-	rc = PTR_ERR(chip->bios_dir[cnt]);
+ 	chip->bios_dir[cnt] = NULL;
+ 	tpm_bios_log_teardown(chip);
+-	return rc;
++	return;
+ }
+ 
+ void tpm_bios_log_teardown(struct tpm_chip *chip)
+diff --git a/drivers/char/tpm/eventlog/tpm1.c b/drivers/char/tpm/eventlog/tpm1.c
+index 739b1d9d16b6..2c96977ad080 100644
+--- a/drivers/char/tpm/eventlog/tpm1.c
++++ b/drivers/char/tpm/eventlog/tpm1.c
+@@ -115,6 +115,7 @@ static void *tpm1_bios_measurements_next(struct seq_file *m, void *v,
+ 	u32 converted_event_size;
+ 	u32 converted_event_type;
+ 
++	(*pos)++;
+ 	converted_event_size = do_endian_conversion(event->event_size);
+ 
+ 	v += sizeof(struct tcpa_event) + converted_event_size;
+@@ -132,7 +133,6 @@ static void *tpm1_bios_measurements_next(struct seq_file *m, void *v,
+ 	    ((v + sizeof(struct tcpa_event) + converted_event_size) > limit))
+ 		return NULL;
+ 
+-	(*pos)++;
+ 	return v;
+ }
+ 
+diff --git a/drivers/char/tpm/eventlog/tpm2.c b/drivers/char/tpm/eventlog/tpm2.c
+index b9aeda1cbcd7..e741b1157525 100644
+--- a/drivers/char/tpm/eventlog/tpm2.c
++++ b/drivers/char/tpm/eventlog/tpm2.c
+@@ -94,6 +94,7 @@ static void *tpm2_bios_measurements_next(struct seq_file *m, void *v,
+ 	size_t event_size;
+ 	void *marker;
+ 
++	(*pos)++;
+ 	event_header = log->bios_event_log;
+ 
+ 	if (v == SEQ_START_TOKEN) {
+@@ -118,7 +119,6 @@ static void *tpm2_bios_measurements_next(struct seq_file *m, void *v,
+ 	if (((v + event_size) >= limit) || (event_size == 0))
+ 		return NULL;
+ 
+-	(*pos)++;
+ 	return v;
+ }
+ 
+diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
+index 3d6d394a8661..58073836b555 100644
+--- a/drivers/char/tpm/tpm-chip.c
++++ b/drivers/char/tpm/tpm-chip.c
+@@ -596,9 +596,7 @@ int tpm_chip_register(struct tpm_chip *chip)
+ 
+ 	tpm_sysfs_add_device(chip);
+ 
+-	rc = tpm_bios_log_setup(chip);
+-	if (rc != 0 && rc != -ENODEV)
+-		return rc;
++	tpm_bios_log_setup(chip);
+ 
+ 	tpm_add_ppi(chip);
+ 
+diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
+index 5620747da0cf..2b2c225e1190 100644
+--- a/drivers/char/tpm/tpm.h
++++ b/drivers/char/tpm/tpm.h
+@@ -235,7 +235,7 @@ int tpm2_prepare_space(struct tpm_chip *chip, struct tpm_space *space, u8 *cmd,
+ int tpm2_commit_space(struct tpm_chip *chip, struct tpm_space *space, void *buf,
+ 		      size_t *bufsiz);
+ 
+-int tpm_bios_log_setup(struct tpm_chip *chip);
++void tpm_bios_log_setup(struct tpm_chip *chip);
+ void tpm_bios_log_teardown(struct tpm_chip *chip);
+ int tpm_dev_common_init(void);
+ void tpm_dev_common_exit(void);
+diff --git a/drivers/clk/ingenic/jz4770-cgu.c b/drivers/clk/ingenic/jz4770-cgu.c
+index 956dd653a43d..c051ecba5cf8 100644
+--- a/drivers/clk/ingenic/jz4770-cgu.c
++++ b/drivers/clk/ingenic/jz4770-cgu.c
+@@ -432,8 +432,10 @@ static void __init jz4770_cgu_init(struct device_node *np)
+ 
+ 	cgu = ingenic_cgu_new(jz4770_cgu_clocks,
+ 			      ARRAY_SIZE(jz4770_cgu_clocks), np);
+-	if (!cgu)
++	if (!cgu) {
+ 		pr_err("%s: failed to initialise CGU\n", __func__);
++		return;
++	}
+ 
+ 	retval = ingenic_cgu_register_clocks(cgu);
+ 	if (retval)
+diff --git a/drivers/clk/ingenic/tcu.c b/drivers/clk/ingenic/tcu.c
+index ad7daa494fd4..cd537c3db782 100644
+--- a/drivers/clk/ingenic/tcu.c
++++ b/drivers/clk/ingenic/tcu.c
+@@ -189,7 +189,7 @@ static long ingenic_tcu_round_rate(struct clk_hw *hw, unsigned long req_rate,
+ 	u8 prescale;
+ 
+ 	if (req_rate > rate)
+-		return -EINVAL;
++		return rate;
+ 
+ 	prescale = ingenic_tcu_get_prescale(rate, req_rate);
+ 
+diff --git a/drivers/clocksource/timer-microchip-pit64b.c b/drivers/clocksource/timer-microchip-pit64b.c
+index bd63d3484838..59e11ca8ee73 100644
+--- a/drivers/clocksource/timer-microchip-pit64b.c
++++ b/drivers/clocksource/timer-microchip-pit64b.c
+@@ -264,6 +264,7 @@ static int __init mchp_pit64b_init_mode(struct mchp_pit64b_timer *timer,
+ 
+ 	if (!best_diff) {
+ 		timer->mode |= MCHP_PIT64B_MR_SGCLK;
++		clk_set_rate(timer->gclk, gclk_round);
+ 		goto done;
+ 	}
+ 
+diff --git a/drivers/cpufreq/imx6q-cpufreq.c b/drivers/cpufreq/imx6q-cpufreq.c
+index 648a09a1778a..edef3399c979 100644
+--- a/drivers/cpufreq/imx6q-cpufreq.c
++++ b/drivers/cpufreq/imx6q-cpufreq.c
+@@ -280,6 +280,9 @@ static int imx6ul_opp_check_speed_grading(struct device *dev)
+ 		void __iomem *base;
+ 
+ 		np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-ocotp");
++		if (!np)
++			np = of_find_compatible_node(NULL, NULL,
++						     "fsl,imx6ull-ocotp");
+ 		if (!np)
+ 			return -ENOENT;
+ 
+@@ -378,23 +381,24 @@ static int imx6q_cpufreq_probe(struct platform_device *pdev)
+ 		goto put_reg;
+ 	}
+ 
++	/* Because we have added the OPPs here, we must free them */
++	free_opp = true;
++
+ 	if (of_machine_is_compatible("fsl,imx6ul") ||
+ 	    of_machine_is_compatible("fsl,imx6ull")) {
+ 		ret = imx6ul_opp_check_speed_grading(cpu_dev);
+ 		if (ret) {
+ 			if (ret == -EPROBE_DEFER)
+-				goto put_node;
++				goto out_free_opp;
+ 
+ 			dev_err(cpu_dev, "failed to read ocotp: %d\n",
+ 				ret);
+-			goto put_node;
++			goto out_free_opp;
+ 		}
+ 	} else {
+ 		imx6q_opp_check_speed_grading(cpu_dev);
+ 	}
+ 
+-	/* Because we have added the OPPs here, we must free them */
+-	free_opp = true;
+ 	num = dev_pm_opp_get_opp_count(cpu_dev);
+ 	if (num < 0) {
+ 		ret = num;
+diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
+index 56f4bc0d209e..1806b1da4366 100644
+--- a/drivers/cpufreq/powernv-cpufreq.c
++++ b/drivers/cpufreq/powernv-cpufreq.c
+@@ -1080,6 +1080,12 @@ free_and_return:
+ 
+ static inline void clean_chip_info(void)
+ {
++	int i;
++
++	/* flush any pending work items */
++	if (chips)
++		for (i = 0; i < nr_chips; i++)
++			cancel_work_sync(&chips[i].throttle);
+ 	kfree(chips);
+ }
+ 
+diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c
+index aa9ccca67045..d6c58184bb57 100644
+--- a/drivers/crypto/caam/caamalg_desc.c
++++ b/drivers/crypto/caam/caamalg_desc.c
+@@ -1379,6 +1379,9 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
+ 				const u32 ctx1_iv_off)
+ {
+ 	u32 *key_jump_cmd;
++	u32 options = cdata->algtype | OP_ALG_AS_INIT | OP_ALG_ENCRYPT;
++	bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) ==
++			    OP_ALG_ALGSEL_CHACHA20);
+ 
+ 	init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
+ 	/* Skip if already shared */
+@@ -1417,14 +1420,15 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
+ 				      LDST_OFFSET_SHIFT));
+ 
+ 	/* Load operation */
+-	append_operation(desc, cdata->algtype | OP_ALG_AS_INIT |
+-			 OP_ALG_ENCRYPT);
++	if (is_chacha20)
++		options |= OP_ALG_AS_FINALIZE;
++	append_operation(desc, options);
+ 
+ 	/* Perform operation */
+ 	skcipher_append_src_dst(desc);
+ 
+ 	/* Store IV */
+-	if (ivsize)
++	if (!is_chacha20 && ivsize)
+ 		append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT |
+ 				 LDST_CLASS_1_CCB | (ctx1_iv_off <<
+ 				 LDST_OFFSET_SHIFT));
+@@ -1451,6 +1455,8 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
+ 				const u32 ctx1_iv_off)
+ {
+ 	u32 *key_jump_cmd;
++	bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) ==
++			    OP_ALG_ALGSEL_CHACHA20);
+ 
+ 	init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
+ 	/* Skip if already shared */
+@@ -1499,7 +1505,7 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
+ 	skcipher_append_src_dst(desc);
+ 
+ 	/* Store IV */
+-	if (ivsize)
++	if (!is_chacha20 && ivsize)
+ 		append_seq_store(desc, ivsize, LDST_SRCDST_BYTE_CONTEXT |
+ 				 LDST_CLASS_1_CCB | (ctx1_iv_off <<
+ 				 LDST_OFFSET_SHIFT));
+@@ -1518,7 +1524,13 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_decap);
+  */
+ void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata)
+ {
+-	__be64 sector_size = cpu_to_be64(512);
++	/*
++	 * Set sector size to a big value, practically disabling
++	 * sector size segmentation in xts implementation. We cannot
++	 * take full advantage of this HW feature with existing
++	 * crypto API / dm-crypt SW architecture.
++	 */
++	__be64 sector_size = cpu_to_be64(BIT(15));
+ 	u32 *key_jump_cmd;
+ 
+ 	init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
+@@ -1571,7 +1583,13 @@ EXPORT_SYMBOL(cnstr_shdsc_xts_skcipher_encap);
+  */
+ void cnstr_shdsc_xts_skcipher_decap(u32 * const desc, struct alginfo *cdata)
+ {
+-	__be64 sector_size = cpu_to_be64(512);
++	/*
++	 * Set sector size to a big value, practically disabling
++	 * sector size segmentation in xts implementation. We cannot
++	 * take full advantage of this HW feature with existing
++	 * crypto API / dm-crypt SW architecture.
++	 */
++	__be64 sector_size = cpu_to_be64(BIT(15));
+ 	u32 *key_jump_cmd;
+ 
+ 	init_sh_desc(desc, HDR_SHARE_SERIAL | HDR_SAVECTX);
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
+index a72586eccd81..954f14bddf1d 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.c
++++ b/drivers/crypto/ccree/cc_buffer_mgr.c
+@@ -87,6 +87,8 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
+ {
+ 	unsigned int nents = 0;
+ 
++	*lbytes = 0;
++
+ 	while (nbytes && sg_list) {
+ 		nents++;
+ 		/* get the number of bytes in the last entry */
+@@ -95,6 +97,7 @@ static unsigned int cc_get_sgl_nents(struct device *dev,
+ 				nbytes : sg_list->length;
+ 		sg_list = sg_next(sg_list);
+ 	}
++
+ 	dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes);
+ 	return nents;
+ }
+@@ -290,37 +293,25 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+ 		     unsigned int nbytes, int direction, u32 *nents,
+ 		     u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
+ {
+-	if (sg_is_last(sg)) {
+-		/* One entry only case -set to DLLI */
+-		if (dma_map_sg(dev, sg, 1, direction) != 1) {
+-			dev_err(dev, "dma_map_sg() single buffer failed\n");
+-			return -ENOMEM;
+-		}
+-		dev_dbg(dev, "Mapped sg: dma_address=%pad page=%p addr=%pK offset=%u length=%u\n",
+-			&sg_dma_address(sg), sg_page(sg), sg_virt(sg),
+-			sg->offset, sg->length);
+-		*lbytes = nbytes;
+-		*nents = 1;
+-		*mapped_nents = 1;
+-	} else {  /*sg_is_last*/
+-		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
+-		if (*nents > max_sg_nents) {
+-			*nents = 0;
+-			dev_err(dev, "Too many fragments. current %d max %d\n",
+-				*nents, max_sg_nents);
+-			return -ENOMEM;
+-		}
+-		/* In case of mmu the number of mapped nents might
+-		 * be changed from the original sgl nents
+-		 */
+-		*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
+-		if (*mapped_nents == 0) {
+-			*nents = 0;
+-			dev_err(dev, "dma_map_sg() sg buffer failed\n");
+-			return -ENOMEM;
+-		}
++	int ret = 0;
++
++	*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
++	if (*nents > max_sg_nents) {
++		*nents = 0;
++		dev_err(dev, "Too many fragments. current %d max %d\n",
++			*nents, max_sg_nents);
++		return -ENOMEM;
++	}
++
++	ret = dma_map_sg(dev, sg, *nents, direction);
++	if (dma_mapping_error(dev, ret)) {
++		*nents = 0;
++		dev_err(dev, "dma_map_sg() sg buffer failed %d\n", ret);
++		return -ENOMEM;
+ 	}
+ 
++	*mapped_nents = ret;
++
+ 	return 0;
+ }
+ 
+@@ -555,11 +546,12 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
+ 		sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
+ 		areq_ctx->assoclen, req->cryptlen);
+ 
+-	dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_BIDIRECTIONAL);
++	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
++		     DMA_BIDIRECTIONAL);
+ 	if (req->src != req->dst) {
+ 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
+ 			sg_virt(req->dst));
+-		dma_unmap_sg(dev, req->dst, sg_nents(req->dst),
++		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
+ 			     DMA_BIDIRECTIONAL);
+ 	}
+ 	if (drvdata->coherent &&
+@@ -881,7 +873,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 					    &src_last_bytes);
+ 	sg_index = areq_ctx->src_sgl->length;
+ 	//check where the data starts
+-	while (sg_index <= size_to_skip) {
++	while (src_mapped_nents && (sg_index <= size_to_skip)) {
+ 		src_mapped_nents--;
+ 		offset -= areq_ctx->src_sgl->length;
+ 		sgl = sg_next(areq_ctx->src_sgl);
+@@ -902,13 +894,17 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 
+ 	if (req->src != req->dst) {
+ 		size_for_map = areq_ctx->assoclen + req->cryptlen;
+-		size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
+-				authsize : 0;
++
++		if (direct == DRV_CRYPTO_DIRECTION_ENCRYPT)
++			size_for_map += authsize;
++		else
++			size_for_map -= authsize;
++
+ 		if (is_gcm4543)
+ 			size_for_map += crypto_aead_ivsize(tfm);
+ 
+ 		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
+-			       &areq_ctx->dst.nents,
++			       &areq_ctx->dst.mapped_nents,
+ 			       LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
+ 			       &dst_mapped_nents);
+ 		if (rc)
+@@ -921,7 +917,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
+ 	offset = size_to_skip;
+ 
+ 	//check where the data starts
+-	while (sg_index <= size_to_skip) {
++	while (dst_mapped_nents && sg_index <= size_to_skip) {
+ 		dst_mapped_nents--;
+ 		offset -= areq_ctx->dst_sgl->length;
+ 		sgl = sg_next(areq_ctx->dst_sgl);
+@@ -1117,13 +1113,15 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
+ 	}
+ 
+ 	size_to_map = req->cryptlen + areq_ctx->assoclen;
+-	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
++	/* If we do in-place encryption, we also need the auth tag */
++	if ((areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT) &&
++	   (req->src == req->dst)) {
+ 		size_to_map += authsize;
+-
++	}
+ 	if (is_gcm4543)
+ 		size_to_map += crypto_aead_ivsize(tfm);
+ 	rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
+-		       &areq_ctx->src.nents,
++		       &areq_ctx->src.mapped_nents,
+ 		       (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
+ 			LLI_MAX_NUM_OF_DATA_ENTRIES),
+ 		       &dummy, &mapped_nents);
+diff --git a/drivers/crypto/ccree/cc_buffer_mgr.h b/drivers/crypto/ccree/cc_buffer_mgr.h
+index af434872c6ff..827b6cb1236e 100644
+--- a/drivers/crypto/ccree/cc_buffer_mgr.h
++++ b/drivers/crypto/ccree/cc_buffer_mgr.h
+@@ -25,6 +25,7 @@ enum cc_sg_cpy_direct {
+ 
+ struct cc_mlli {
+ 	cc_sram_addr_t sram_addr;
++	unsigned int mapped_nents;
+ 	unsigned int nents; //sg nents
+ 	unsigned int mlli_nents; //mlli nents might be different than the above
+ };
+diff --git a/drivers/crypto/mxs-dcp.c b/drivers/crypto/mxs-dcp.c
+index 435ac1c83df9..d84530293036 100644
+--- a/drivers/crypto/mxs-dcp.c
++++ b/drivers/crypto/mxs-dcp.c
+@@ -20,6 +20,7 @@
+ #include <crypto/sha.h>
+ #include <crypto/internal/hash.h>
+ #include <crypto/internal/skcipher.h>
++#include <crypto/scatterwalk.h>
+ 
+ #define DCP_MAX_CHANS	4
+ #define DCP_BUF_SZ	PAGE_SIZE
+@@ -611,49 +612,46 @@ static int dcp_sha_req_to_buf(struct crypto_async_request *arq)
+ 	struct dcp_async_ctx *actx = crypto_ahash_ctx(tfm);
+ 	struct dcp_sha_req_ctx *rctx = ahash_request_ctx(req);
+ 	struct hash_alg_common *halg = crypto_hash_alg_common(tfm);
+-	const int nents = sg_nents(req->src);
+ 
+ 	uint8_t *in_buf = sdcp->coh->sha_in_buf;
+ 	uint8_t *out_buf = sdcp->coh->sha_out_buf;
+ 
+-	uint8_t *src_buf;
+-
+ 	struct scatterlist *src;
+ 
+-	unsigned int i, len, clen;
++	unsigned int i, len, clen, oft = 0;
+ 	int ret;
+ 
+ 	int fin = rctx->fini;
+ 	if (fin)
+ 		rctx->fini = 0;
+ 
+-	for_each_sg(req->src, src, nents, i) {
+-		src_buf = sg_virt(src);
+-		len = sg_dma_len(src);
+-
+-		do {
+-			if (actx->fill + len > DCP_BUF_SZ)
+-				clen = DCP_BUF_SZ - actx->fill;
+-			else
+-				clen = len;
+-
+-			memcpy(in_buf + actx->fill, src_buf, clen);
+-			len -= clen;
+-			src_buf += clen;
+-			actx->fill += clen;
++	src = req->src;
++	len = req->nbytes;
+ 
+-			/*
+-			 * If we filled the buffer and still have some
+-			 * more data, submit the buffer.
+-			 */
+-			if (len && actx->fill == DCP_BUF_SZ) {
+-				ret = mxs_dcp_run_sha(req);
+-				if (ret)
+-					return ret;
+-				actx->fill = 0;
+-				rctx->init = 0;
+-			}
+-		} while (len);
++	while (len) {
++		if (actx->fill + len > DCP_BUF_SZ)
++			clen = DCP_BUF_SZ - actx->fill;
++		else
++			clen = len;
++
++		scatterwalk_map_and_copy(in_buf + actx->fill, src, oft, clen,
++					 0);
++
++		len -= clen;
++		oft += clen;
++		actx->fill += clen;
++
++		/*
++		 * If we filled the buffer and still have some
++		 * more data, submit the buffer.
++		 */
++		if (len && actx->fill == DCP_BUF_SZ) {
++			ret = mxs_dcp_run_sha(req);
++			if (ret)
++				return ret;
++			actx->fill = 0;
++			rctx->init = 0;
++		}
+ 	}
+ 
+ 	if (fin) {
+diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
+index 69e0d90460e6..2349f2ad946b 100644
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -1180,20 +1180,21 @@ void edac_mc_handle_error(const enum hw_event_mc_err_type type,
+ 		 * channel/memory controller/...  may be affected.
+ 		 * Also, don't show errors for empty DIMM slots.
+ 		 */
+-		if (!e->enable_per_layer_report || !dimm->nr_pages)
++		if (!dimm->nr_pages)
+ 			continue;
+ 
+-		if (n_labels >= EDAC_MAX_LABELS) {
+-			e->enable_per_layer_report = false;
+-			break;
+-		}
+ 		n_labels++;
+-		if (p != e->label) {
+-			strcpy(p, OTHER_LABEL);
+-			p += strlen(OTHER_LABEL);
++		if (n_labels > EDAC_MAX_LABELS) {
++			p = e->label;
++			*p = '\0';
++		} else {
++			if (p != e->label) {
++				strcpy(p, OTHER_LABEL);
++				p += strlen(OTHER_LABEL);
++			}
++			strcpy(p, dimm->label);
++			p += strlen(p);
+ 		}
+-		strcpy(p, dimm->label);
+-		p += strlen(p);
+ 
+ 		/*
+ 		 * get csrow/channel of the DIMM, in order to allow
+diff --git a/drivers/firmware/arm_sdei.c b/drivers/firmware/arm_sdei.c
+index a479023fa036..77eaa9a2fd15 100644
+--- a/drivers/firmware/arm_sdei.c
++++ b/drivers/firmware/arm_sdei.c
+@@ -491,11 +491,6 @@ static int _sdei_event_unregister(struct sdei_event *event)
+ {
+ 	lockdep_assert_held(&sdei_events_lock);
+ 
+-	spin_lock(&sdei_list_lock);
+-	event->reregister = false;
+-	event->reenable = false;
+-	spin_unlock(&sdei_list_lock);
+-
+ 	if (event->type == SDEI_EVENT_TYPE_SHARED)
+ 		return sdei_api_event_unregister(event->event_num);
+ 
+@@ -518,6 +513,11 @@ int sdei_event_unregister(u32 event_num)
+ 			break;
+ 		}
+ 
++		spin_lock(&sdei_list_lock);
++		event->reregister = false;
++		event->reenable = false;
++		spin_unlock(&sdei_list_lock);
++
+ 		err = _sdei_event_unregister(event);
+ 		if (err)
+ 			break;
+@@ -585,26 +585,15 @@ static int _sdei_event_register(struct sdei_event *event)
+ 
+ 	lockdep_assert_held(&sdei_events_lock);
+ 
+-	spin_lock(&sdei_list_lock);
+-	event->reregister = true;
+-	spin_unlock(&sdei_list_lock);
+-
+ 	if (event->type == SDEI_EVENT_TYPE_SHARED)
+ 		return sdei_api_event_register(event->event_num,
+ 					       sdei_entry_point,
+ 					       event->registered,
+ 					       SDEI_EVENT_REGISTER_RM_ANY, 0);
+ 
+-
+ 	err = sdei_do_cross_call(_local_event_register, event);
+-	if (err) {
+-		spin_lock(&sdei_list_lock);
+-		event->reregister = false;
+-		event->reenable = false;
+-		spin_unlock(&sdei_list_lock);
+-
++	if (err)
+ 		sdei_do_cross_call(_local_event_unregister, event);
+-	}
+ 
+ 	return err;
+ }
+@@ -632,8 +621,17 @@ int sdei_event_register(u32 event_num, sdei_event_callback *cb, void *arg)
+ 			break;
+ 		}
+ 
++		spin_lock(&sdei_list_lock);
++		event->reregister = true;
++		spin_unlock(&sdei_list_lock);
++
+ 		err = _sdei_event_register(event);
+ 		if (err) {
++			spin_lock(&sdei_list_lock);
++			event->reregister = false;
++			event->reenable = false;
++			spin_unlock(&sdei_list_lock);
++
+ 			sdei_event_destroy(event);
+ 			pr_warn("Failed to register event %u: %d\n", event_num,
+ 				err);
+diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
+index 21ea99f65113..77cb95f70ed6 100644
+--- a/drivers/firmware/efi/efi.c
++++ b/drivers/firmware/efi/efi.c
+@@ -570,7 +570,7 @@ int __init efi_config_parse_tables(void *config_tables, int count, int sz,
+ 		}
+ 	}
+ 
+-	if (efi_enabled(EFI_MEMMAP))
++	if (!IS_ENABLED(CONFIG_X86_32) && efi_enabled(EFI_MEMMAP))
+ 		efi_memattr_init();
+ 
+ 	efi_tpm_eventlog_init();
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index b8975857d60d..48e2863461b7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2285,8 +2285,6 @@ static int amdgpu_device_ip_suspend_phase1(struct amdgpu_device *adev)
+ {
+ 	int i, r;
+ 
+-	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+-	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ 
+ 	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
+ 		if (!adev->ip_blocks[i].status.valid)
+@@ -3309,6 +3307,9 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
+ 		}
+ 	}
+ 
++	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
++	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
++
+ 	amdgpu_amdkfd_suspend(adev);
+ 
+ 	amdgpu_ras_suspend(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 889154a78c4a..5d5bd34eb4a7 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1158,6 +1158,8 @@ static void gfx_v9_0_check_fw_write_wait(struct amdgpu_device *adev)
+ 			adev->gfx.mec_fw_write_wait = true;
+ 		break;
+ 	default:
++		adev->gfx.me_fw_write_wait = true;
++		adev->gfx.mec_fw_write_wait = true;
+ 		break;
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+index 9ef3f7b91a1d..abdf29afa2f2 100644
+--- a/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
++++ b/drivers/gpu/drm/amd/display/dc/clk_mgr/dcn21/rn_clk_mgr.c
+@@ -643,7 +643,7 @@ static void rn_clk_mgr_helper_populate_bw_params(struct clk_bw_params *bw_params
+ 	/* Find lowest DPM, FCLK is filled in reverse order*/
+ 
+ 	for (i = PP_SMU_NUM_FCLK_DPM_LEVELS - 1; i >= 0; i--) {
+-		if (clock_table->FClocks[i].Freq != 0) {
++		if (clock_table->FClocks[i].Freq != 0 && clock_table->FClocks[i].Vol != 0) {
+ 			j = i;
+ 			break;
+ 		}
+diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+index 3ad0f4aa3aa3..f7a1ce37227c 100644
+--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+@@ -240,6 +240,7 @@ static int renoir_print_clk_levels(struct smu_context *smu,
+ 	uint32_t cur_value = 0, value = 0, count = 0, min = 0, max = 0;
+ 	DpmClocks_t *clk_table = smu->smu_table.clocks_table;
+ 	SmuMetrics_t metrics;
++	bool cur_value_match_level = false;
+ 
+ 	if (!clk_table || clk_type >= SMU_CLK_COUNT)
+ 		return -EINVAL;
+@@ -298,8 +299,13 @@ static int renoir_print_clk_levels(struct smu_context *smu,
+ 		GET_DPM_CUR_FREQ(clk_table, clk_type, i, value);
+ 		size += sprintf(buf + size, "%d: %uMhz %s\n", i, value,
+ 				cur_value == value ? "*" : "");
++		if (cur_value == value)
++			cur_value_match_level = true;
+ 	}
+ 
++	if (!cur_value_match_level)
++		size += sprintf(buf + size, "   %uMhz *\n", cur_value);
++
+ 	return size;
+ }
+ 
+@@ -881,6 +887,17 @@ static int renoir_read_sensor(struct smu_context *smu,
+ 	return ret;
+ }
+ 
++static bool renoir_is_dpm_running(struct smu_context *smu)
++{
++	/*
++	 * Until now, the pmfw hasn't exported the interface of SMU
++	 * feature mask to APU SKU so just force on all the feature
++	 * at early initial stage.
++	 */
++	return true;
++
++}
++
+ static const struct pptable_funcs renoir_ppt_funcs = {
+ 	.get_smu_msg_index = renoir_get_smu_msg_index,
+ 	.get_smu_clk_index = renoir_get_smu_clk_index,
+@@ -922,6 +939,7 @@ static const struct pptable_funcs renoir_ppt_funcs = {
+ 	.mode2_reset = smu_v12_0_mode2_reset,
+ 	.set_soft_freq_limited_range = smu_v12_0_set_soft_freq_limited_range,
+ 	.set_driver_table_location = smu_v12_0_set_driver_table_location,
++	.is_dpm_running = renoir_is_dpm_running,
+ };
+ 
+ void renoir_set_ppt_funcs(struct smu_context *smu)
+diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.h b/drivers/gpu/drm/amd/powerplay/renoir_ppt.h
+index 2a390ddd37dd..89cd6da118a3 100644
+--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.h
++++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.h
+@@ -37,7 +37,7 @@ extern void renoir_set_ppt_funcs(struct smu_context *smu);
+ 			freq = table->SocClocks[dpm_level].Freq;	\
+ 			break;						\
+ 		case SMU_MCLK:						\
+-			freq = table->MemClocks[dpm_level].Freq;	\
++			freq = table->FClocks[dpm_level].Freq;	\
+ 			break;						\
+ 		case SMU_DCEFCLK:					\
+ 			freq = table->DcfClocks[dpm_level].Freq;	\
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix-anx78xx.c b/drivers/gpu/drm/bridge/analogix/analogix-anx78xx.c
+index 41867be03751..864423f59d66 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix-anx78xx.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix-anx78xx.c
+@@ -722,10 +722,9 @@ static int anx78xx_dp_link_training(struct anx78xx *anx78xx)
+ 	if (err)
+ 		return err;
+ 
+-	dpcd[0] = drm_dp_max_link_rate(anx78xx->dpcd);
+-	dpcd[0] = drm_dp_link_rate_to_bw_code(dpcd[0]);
+ 	err = regmap_write(anx78xx->map[I2C_IDX_TX_P0],
+-			   SP_DP_MAIN_LINK_BW_SET_REG, dpcd[0]);
++			   SP_DP_MAIN_LINK_BW_SET_REG,
++			   anx78xx->dpcd[DP_MAX_LINK_RATE]);
+ 	if (err)
+ 		return err;
+ 
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index ed0fea2ac322..7b7f0da01346 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3504,9 +3504,9 @@ static int drm_dp_get_vc_payload_bw(u8 dp_link_bw, u8  dp_link_count)
+ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state)
+ {
+ 	int ret = 0;
+-	int i = 0;
+ 	struct drm_dp_mst_branch *mstb = NULL;
+ 
++	mutex_lock(&mgr->payload_lock);
+ 	mutex_lock(&mgr->lock);
+ 	if (mst_state == mgr->mst_state)
+ 		goto out_unlock;
+@@ -3565,27 +3565,19 @@ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool ms
+ 		/* this can fail if the device is gone */
+ 		drm_dp_dpcd_writeb(mgr->aux, DP_MSTM_CTRL, 0);
+ 		ret = 0;
+-		mutex_lock(&mgr->payload_lock);
+-		memset(mgr->payloads, 0, mgr->max_payloads * sizeof(struct drm_dp_payload));
++		memset(mgr->payloads, 0,
++		       mgr->max_payloads * sizeof(mgr->payloads[0]));
++		memset(mgr->proposed_vcpis, 0,
++		       mgr->max_payloads * sizeof(mgr->proposed_vcpis[0]));
+ 		mgr->payload_mask = 0;
+ 		set_bit(0, &mgr->payload_mask);
+-		for (i = 0; i < mgr->max_payloads; i++) {
+-			struct drm_dp_vcpi *vcpi = mgr->proposed_vcpis[i];
+-
+-			if (vcpi) {
+-				vcpi->vcpi = 0;
+-				vcpi->num_slots = 0;
+-			}
+-			mgr->proposed_vcpis[i] = NULL;
+-		}
+ 		mgr->vcpi_mask = 0;
+-		mutex_unlock(&mgr->payload_lock);
+-
+ 		mgr->payload_id_table_cleared = false;
+ 	}
+ 
+ out_unlock:
+ 	mutex_unlock(&mgr->lock);
++	mutex_unlock(&mgr->payload_lock);
+ 	if (mstb)
+ 		drm_dp_mst_topology_put_mstb(mstb);
+ 	return ret;
+diff --git a/drivers/gpu/drm/drm_pci.c b/drivers/gpu/drm/drm_pci.c
+index f2e43d341980..d16dac4325f9 100644
+--- a/drivers/gpu/drm/drm_pci.c
++++ b/drivers/gpu/drm/drm_pci.c
+@@ -51,8 +51,6 @@
+ drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t align)
+ {
+ 	drm_dma_handle_t *dmah;
+-	unsigned long addr;
+-	size_t sz;
+ 
+ 	/* pci_alloc_consistent only guarantees alignment to the smallest
+ 	 * PAGE_SIZE order which is greater than or equal to the requested size.
+@@ -68,20 +66,13 @@ drm_dma_handle_t *drm_pci_alloc(struct drm_device * dev, size_t size, size_t ali
+ 	dmah->size = size;
+ 	dmah->vaddr = dma_alloc_coherent(&dev->pdev->dev, size,
+ 					 &dmah->busaddr,
+-					 GFP_KERNEL | __GFP_COMP);
++					 GFP_KERNEL);
+ 
+ 	if (dmah->vaddr == NULL) {
+ 		kfree(dmah);
+ 		return NULL;
+ 	}
+ 
+-	/* XXX - Is virt_to_page() legal for consistent mem? */
+-	/* Reserve */
+-	for (addr = (unsigned long)dmah->vaddr, sz = size;
+-	     sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+-		SetPageReserved(virt_to_page((void *)addr));
+-	}
+-
+ 	return dmah;
+ }
+ 
+@@ -94,19 +85,9 @@ EXPORT_SYMBOL(drm_pci_alloc);
+  */
+ void __drm_legacy_pci_free(struct drm_device * dev, drm_dma_handle_t * dmah)
+ {
+-	unsigned long addr;
+-	size_t sz;
+-
+-	if (dmah->vaddr) {
+-		/* XXX - Is virt_to_page() legal for consistent mem? */
+-		/* Unreserve */
+-		for (addr = (unsigned long)dmah->vaddr, sz = dmah->size;
+-		     sz > 0; addr += PAGE_SIZE, sz -= PAGE_SIZE) {
+-			ClearPageReserved(virt_to_page((void *)addr));
+-		}
++	if (dmah->vaddr)
+ 		dma_free_coherent(&dev->pdev->dev, dmah->size, dmah->vaddr,
+ 				  dmah->busaddr);
+-	}
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c
+index 1de2cde2277c..282774e469ac 100644
+--- a/drivers/gpu/drm/drm_prime.c
++++ b/drivers/gpu/drm/drm_prime.c
+@@ -962,27 +962,40 @@ int drm_prime_sg_to_page_addr_arrays(struct sg_table *sgt, struct page **pages,
+ 	unsigned count;
+ 	struct scatterlist *sg;
+ 	struct page *page;
+-	u32 len, index;
++	u32 page_len, page_index;
+ 	dma_addr_t addr;
++	u32 dma_len, dma_index;
+ 
+-	index = 0;
++	/*
++	 * Scatterlist elements contain both pages and DMA addresses, but
++	 * one should not assume 1:1 relation between them. The sg->length is
++	 * the size of the physical memory chunk described by the sg->page,
++	 * while sg_dma_len(sg) is the size of the DMA (IO virtual) chunk
++	 * described by the sg_dma_address(sg).
++	 */
++	page_index = 0;
++	dma_index = 0;
+ 	for_each_sg(sgt->sgl, sg, sgt->nents, count) {
+-		len = sg_dma_len(sg);
++		page_len = sg->length;
+ 		page = sg_page(sg);
++		dma_len = sg_dma_len(sg);
+ 		addr = sg_dma_address(sg);
+ 
+-		while (len > 0) {
+-			if (WARN_ON(index >= max_entries))
++		while (pages && page_len > 0) {
++			if (WARN_ON(page_index >= max_entries))
+ 				return -1;
+-			if (pages)
+-				pages[index] = page;
+-			if (addrs)
+-				addrs[index] = addr;
+-
++			pages[page_index] = page;
+ 			page++;
++			page_len -= PAGE_SIZE;
++			page_index++;
++		}
++		while (addrs && dma_len > 0) {
++			if (WARN_ON(dma_index >= max_entries))
++				return -1;
++			addrs[dma_index] = addr;
+ 			addr += PAGE_SIZE;
+-			len -= PAGE_SIZE;
+-			index++;
++			dma_len -= PAGE_SIZE;
++			dma_index++;
+ 		}
+ 	}
+ 	return 0;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
+index 8adbf2861bff..e6795bafcbb9 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
+@@ -32,6 +32,7 @@ struct etnaviv_pm_domain {
+ };
+ 
+ struct etnaviv_pm_domain_meta {
++	unsigned int feature;
+ 	const struct etnaviv_pm_domain *domains;
+ 	u32 nr_domains;
+ };
+@@ -410,36 +411,78 @@ static const struct etnaviv_pm_domain doms_vg[] = {
+ 
+ static const struct etnaviv_pm_domain_meta doms_meta[] = {
+ 	{
++		.feature = chipFeatures_PIPE_3D,
+ 		.nr_domains = ARRAY_SIZE(doms_3d),
+ 		.domains = &doms_3d[0]
+ 	},
+ 	{
++		.feature = chipFeatures_PIPE_2D,
+ 		.nr_domains = ARRAY_SIZE(doms_2d),
+ 		.domains = &doms_2d[0]
+ 	},
+ 	{
++		.feature = chipFeatures_PIPE_VG,
+ 		.nr_domains = ARRAY_SIZE(doms_vg),
+ 		.domains = &doms_vg[0]
+ 	}
+ };
+ 
++static unsigned int num_pm_domains(const struct etnaviv_gpu *gpu)
++{
++	unsigned int num = 0, i;
++
++	for (i = 0; i < ARRAY_SIZE(doms_meta); i++) {
++		const struct etnaviv_pm_domain_meta *meta = &doms_meta[i];
++
++		if (gpu->identity.features & meta->feature)
++			num += meta->nr_domains;
++	}
++
++	return num;
++}
++
++static const struct etnaviv_pm_domain *pm_domain(const struct etnaviv_gpu *gpu,
++	unsigned int index)
++{
++	const struct etnaviv_pm_domain *domain = NULL;
++	unsigned int offset = 0, i;
++
++	for (i = 0; i < ARRAY_SIZE(doms_meta); i++) {
++		const struct etnaviv_pm_domain_meta *meta = &doms_meta[i];
++
++		if (!(gpu->identity.features & meta->feature))
++			continue;
++
++		if (meta->nr_domains < (index - offset)) {
++			offset += meta->nr_domains;
++			continue;
++		}
++
++		domain = meta->domains + (index - offset);
++	}
++
++	return domain;
++}
++
+ int etnaviv_pm_query_dom(struct etnaviv_gpu *gpu,
+ 	struct drm_etnaviv_pm_domain *domain)
+ {
+-	const struct etnaviv_pm_domain_meta *meta = &doms_meta[domain->pipe];
++	const unsigned int nr_domains = num_pm_domains(gpu);
+ 	const struct etnaviv_pm_domain *dom;
+ 
+-	if (domain->iter >= meta->nr_domains)
++	if (domain->iter >= nr_domains)
+ 		return -EINVAL;
+ 
+-	dom = meta->domains + domain->iter;
++	dom = pm_domain(gpu, domain->iter);
++	if (!dom)
++		return -EINVAL;
+ 
+ 	domain->id = domain->iter;
+ 	domain->nr_signals = dom->nr_signals;
+ 	strncpy(domain->name, dom->name, sizeof(domain->name));
+ 
+ 	domain->iter++;
+-	if (domain->iter == meta->nr_domains)
++	if (domain->iter == nr_domains)
+ 		domain->iter = 0xff;
+ 
+ 	return 0;
+@@ -448,14 +491,16 @@ int etnaviv_pm_query_dom(struct etnaviv_gpu *gpu,
+ int etnaviv_pm_query_sig(struct etnaviv_gpu *gpu,
+ 	struct drm_etnaviv_pm_signal *signal)
+ {
+-	const struct etnaviv_pm_domain_meta *meta = &doms_meta[signal->pipe];
++	const unsigned int nr_domains = num_pm_domains(gpu);
+ 	const struct etnaviv_pm_domain *dom;
+ 	const struct etnaviv_pm_signal *sig;
+ 
+-	if (signal->domain >= meta->nr_domains)
++	if (signal->domain >= nr_domains)
+ 		return -EINVAL;
+ 
+-	dom = meta->domains + signal->domain;
++	dom = pm_domain(gpu, signal->domain);
++	if (!dom)
++		return -EINVAL;
+ 
+ 	if (signal->iter >= dom->nr_signals)
+ 		return -EINVAL;
+diff --git a/drivers/gpu/drm/i915/Kconfig.profile b/drivers/gpu/drm/i915/Kconfig.profile
+index c280b6ae38eb..3b39998b7210 100644
+--- a/drivers/gpu/drm/i915/Kconfig.profile
++++ b/drivers/gpu/drm/i915/Kconfig.profile
+@@ -35,6 +35,10 @@ config DRM_I915_PREEMPT_TIMEOUT
+ 
+ 	  May be 0 to disable the timeout.
+ 
++	  The compiled in default may get overridden at driver probe time on
++	  certain platforms and certain engines which will be reflected in the
++	  sysfs control.
++
+ config DRM_I915_SPIN_REQUEST
+ 	int "Busywait for request completion (us)"
+ 	default 5 # microseconds
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index d9a61f341070..2fe594952748 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -2225,7 +2225,11 @@ static void intel_ddi_get_power_domains(struct intel_encoder *encoder,
+ 		return;
+ 
+ 	dig_port = enc_to_dig_port(encoder);
+-	intel_display_power_get(dev_priv, dig_port->ddi_io_power_domain);
++
++	if (!intel_phy_is_tc(dev_priv, phy) ||
++	    dig_port->tc_mode != TC_PORT_TBT_ALT)
++		intel_display_power_get(dev_priv,
++					dig_port->ddi_io_power_domain);
+ 
+ 	/*
+ 	 * AUX power is only needed for (e)DP mode, and for HDMI mode on TC
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+index 7643a30ba4cd..2b8681bfecc3 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
+@@ -914,11 +914,13 @@ static inline struct i915_ggtt *cache_to_ggtt(struct reloc_cache *cache)
+ 
+ static void reloc_gpu_flush(struct reloc_cache *cache)
+ {
+-	GEM_BUG_ON(cache->rq_size >= cache->rq->batch->obj->base.size / sizeof(u32));
++	struct drm_i915_gem_object *obj = cache->rq->batch->obj;
++
++	GEM_BUG_ON(cache->rq_size >= obj->base.size / sizeof(u32));
+ 	cache->rq_cmd[cache->rq_size] = MI_BATCH_BUFFER_END;
+ 
+-	__i915_gem_object_flush_map(cache->rq->batch->obj, 0, cache->rq_size);
+-	i915_gem_object_unpin_map(cache->rq->batch->obj);
++	__i915_gem_object_flush_map(obj, 0, sizeof(u32) * (cache->rq_size + 1));
++	i915_gem_object_unpin_map(obj);
+ 
+ 	intel_gt_chipset_flush(cache->rq->engine->gt);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+index 4d1de2d97d5c..9aabc5815d38 100644
+--- a/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
++++ b/drivers/gpu/drm/i915/gt/gen8_ppgtt.c
+@@ -25,6 +25,30 @@ static u64 gen8_pde_encode(const dma_addr_t addr,
+ 	return pde;
+ }
+ 
++static u64 gen8_pte_encode(dma_addr_t addr,
++			   enum i915_cache_level level,
++			   u32 flags)
++{
++	gen8_pte_t pte = addr | _PAGE_PRESENT | _PAGE_RW;
++
++	if (unlikely(flags & PTE_READ_ONLY))
++		pte &= ~_PAGE_RW;
++
++	switch (level) {
++	case I915_CACHE_NONE:
++		pte |= PPAT_UNCACHED;
++		break;
++	case I915_CACHE_WT:
++		pte |= PPAT_DISPLAY_ELLC;
++		break;
++	default:
++		pte |= PPAT_CACHED;
++		break;
++	}
++
++	return pte;
++}
++
+ static void gen8_ppgtt_notify_vgt(struct i915_ppgtt *ppgtt, bool create)
+ {
+ 	struct drm_i915_private *i915 = ppgtt->vm.i915;
+@@ -706,6 +730,8 @@ struct i915_ppgtt *gen8_ppgtt_create(struct intel_gt *gt)
+ 	ppgtt->vm.allocate_va_range = gen8_ppgtt_alloc;
+ 	ppgtt->vm.clear_range = gen8_ppgtt_clear;
+ 
++	ppgtt->vm.pte_encode = gen8_pte_encode;
++
+ 	if (intel_vgpu_active(gt->i915))
+ 		gen8_ppgtt_notify_vgt(ppgtt, true);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_cs.c b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+index 06ff7695fa29..4e8ba1dadb02 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_cs.c
++++ b/drivers/gpu/drm/i915/gt/intel_engine_cs.c
+@@ -274,6 +274,7 @@ static void intel_engine_sanitize_mmio(struct intel_engine_cs *engine)
+ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
+ {
+ 	const struct engine_info *info = &intel_engines[id];
++	struct drm_i915_private *i915 = gt->i915;
+ 	struct intel_engine_cs *engine;
+ 
+ 	BUILD_BUG_ON(MAX_ENGINE_CLASS >= BIT(GEN11_ENGINE_CLASS_WIDTH));
+@@ -300,11 +301,11 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
+ 	engine->id = id;
+ 	engine->legacy_idx = INVALID_ENGINE;
+ 	engine->mask = BIT(id);
+-	engine->i915 = gt->i915;
++	engine->i915 = i915;
+ 	engine->gt = gt;
+ 	engine->uncore = gt->uncore;
+ 	engine->hw_id = engine->guc_id = info->hw_id;
+-	engine->mmio_base = __engine_mmio_base(gt->i915, info->mmio_bases);
++	engine->mmio_base = __engine_mmio_base(i915, info->mmio_bases);
+ 
+ 	engine->class = info->class;
+ 	engine->instance = info->instance;
+@@ -319,11 +320,15 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
+ 	engine->props.timeslice_duration_ms =
+ 		CONFIG_DRM_I915_TIMESLICE_DURATION;
+ 
++	/* Override to uninterruptible for OpenCL workloads. */
++	if (INTEL_GEN(i915) == 12 && engine->class == RENDER_CLASS)
++		engine->props.preempt_timeout_ms = 0;
++
+ 	engine->context_size = intel_engine_context_size(gt, engine->class);
+ 	if (WARN_ON(engine->context_size > BIT(20)))
+ 		engine->context_size = 0;
+ 	if (engine->context_size)
+-		DRIVER_CAPS(gt->i915)->has_logical_contexts = true;
++		DRIVER_CAPS(i915)->has_logical_contexts = true;
+ 
+ 	/* Nothing to do here, execute in order of dependencies */
+ 	engine->schedule = NULL;
+@@ -339,7 +344,7 @@ static int intel_engine_setup(struct intel_gt *gt, enum intel_engine_id id)
+ 	gt->engine_class[info->class][info->instance] = engine;
+ 	gt->engine[id] = engine;
+ 
+-	gt->i915->engine[id] = engine;
++	i915->engine[id] = engine;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_ggtt.c b/drivers/gpu/drm/i915/gt/intel_ggtt.c
+index 531d501be01f..d0d35c55170f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_ggtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_ggtt.c
+@@ -167,6 +167,13 @@ static void gmch_ggtt_invalidate(struct i915_ggtt *ggtt)
+ 	intel_gtt_chipset_flush();
+ }
+ 
++static u64 gen8_ggtt_pte_encode(dma_addr_t addr,
++				enum i915_cache_level level,
++				u32 flags)
++{
++	return addr | _PAGE_PRESENT;
++}
++
+ static void gen8_set_pte(void __iomem *addr, gen8_pte_t pte)
+ {
+ 	writeq(pte, addr);
+@@ -182,7 +189,7 @@ static void gen8_ggtt_insert_page(struct i915_address_space *vm,
+ 	gen8_pte_t __iomem *pte =
+ 		(gen8_pte_t __iomem *)ggtt->gsm + offset / I915_GTT_PAGE_SIZE;
+ 
+-	gen8_set_pte(pte, gen8_pte_encode(addr, level, 0));
++	gen8_set_pte(pte, gen8_ggtt_pte_encode(addr, level, 0));
+ 
+ 	ggtt->invalidate(ggtt);
+ }
+@@ -192,10 +199,11 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
+ 				     enum i915_cache_level level,
+ 				     u32 flags)
+ {
++	const gen8_pte_t pte_encode = gen8_ggtt_pte_encode(0, level, 0);
+ 	struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
+-	struct sgt_iter sgt_iter;
+-	gen8_pte_t __iomem *gtt_entries;
+-	const gen8_pte_t pte_encode = gen8_pte_encode(0, level, 0);
++	gen8_pte_t __iomem *gte;
++	gen8_pte_t __iomem *end;
++	struct sgt_iter iter;
+ 	dma_addr_t addr;
+ 
+ 	/*
+@@ -203,10 +211,17 @@ static void gen8_ggtt_insert_entries(struct i915_address_space *vm,
+ 	 * not to allow the user to override access to a read only page.
+ 	 */
+ 
+-	gtt_entries = (gen8_pte_t __iomem *)ggtt->gsm;
+-	gtt_entries += vma->node.start / I915_GTT_PAGE_SIZE;
+-	for_each_sgt_daddr(addr, sgt_iter, vma->pages)
+-		gen8_set_pte(gtt_entries++, pte_encode | addr);
++	gte = (gen8_pte_t __iomem *)ggtt->gsm;
++	gte += vma->node.start / I915_GTT_PAGE_SIZE;
++	end = gte + vma->node.size / I915_GTT_PAGE_SIZE;
++
++	for_each_sgt_daddr(addr, iter, vma->pages)
++		gen8_set_pte(gte++, pte_encode | addr);
++	GEM_BUG_ON(gte > end);
++
++	/* Fill the allocated but "unused" space beyond the end of the buffer */
++	while (gte < end)
++		gen8_set_pte(gte++, vm->scratch[0].encode);
+ 
+ 	/*
+ 	 * We want to flush the TLBs only after we're certain all the PTE
+@@ -242,13 +257,22 @@ static void gen6_ggtt_insert_entries(struct i915_address_space *vm,
+ 				     u32 flags)
+ {
+ 	struct i915_ggtt *ggtt = i915_vm_to_ggtt(vm);
+-	gen6_pte_t __iomem *entries = (gen6_pte_t __iomem *)ggtt->gsm;
+-	unsigned int i = vma->node.start / I915_GTT_PAGE_SIZE;
++	gen6_pte_t __iomem *gte;
++	gen6_pte_t __iomem *end;
+ 	struct sgt_iter iter;
+ 	dma_addr_t addr;
+ 
++	gte = (gen6_pte_t __iomem *)ggtt->gsm;
++	gte += vma->node.start / I915_GTT_PAGE_SIZE;
++	end = gte + vma->node.size / I915_GTT_PAGE_SIZE;
++
+ 	for_each_sgt_daddr(addr, iter, vma->pages)
+-		iowrite32(vm->pte_encode(addr, level, flags), &entries[i++]);
++		iowrite32(vm->pte_encode(addr, level, flags), gte++);
++	GEM_BUG_ON(gte > end);
++
++	/* Fill the allocated but "unused" space beyond the end of the buffer */
++	while (gte < end)
++		iowrite32(vm->scratch[0].encode, gte++);
+ 
+ 	/*
+ 	 * We want to flush the TLBs only after we're certain all the PTE
+@@ -890,7 +914,7 @@ static int gen8_gmch_probe(struct i915_ggtt *ggtt)
+ 	ggtt->vm.vma_ops.set_pages   = ggtt_set_pages;
+ 	ggtt->vm.vma_ops.clear_pages = clear_pages;
+ 
+-	ggtt->vm.pte_encode = gen8_pte_encode;
++	ggtt->vm.pte_encode = gen8_ggtt_pte_encode;
+ 
+ 	setup_private_pat(ggtt->vm.gt->uncore);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.c b/drivers/gpu/drm/i915/gt/intel_gtt.c
+index 16acdc5d6734..f6fcf05d54f3 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gtt.c
++++ b/drivers/gpu/drm/i915/gt/intel_gtt.c
+@@ -454,30 +454,6 @@ void gtt_write_workarounds(struct intel_gt *gt)
+ 	}
+ }
+ 
+-u64 gen8_pte_encode(dma_addr_t addr,
+-		    enum i915_cache_level level,
+-		    u32 flags)
+-{
+-	gen8_pte_t pte = addr | _PAGE_PRESENT | _PAGE_RW;
+-
+-	if (unlikely(flags & PTE_READ_ONLY))
+-		pte &= ~_PAGE_RW;
+-
+-	switch (level) {
+-	case I915_CACHE_NONE:
+-		pte |= PPAT_UNCACHED;
+-		break;
+-	case I915_CACHE_WT:
+-		pte |= PPAT_DISPLAY_ELLC;
+-		break;
+-	default:
+-		pte |= PPAT_CACHED;
+-		break;
+-	}
+-
+-	return pte;
+-}
+-
+ static void tgl_setup_private_ppat(struct intel_uncore *uncore)
+ {
+ 	/* TGL doesn't support LLC or AGE settings */
+diff --git a/drivers/gpu/drm/i915/gt/intel_gtt.h b/drivers/gpu/drm/i915/gt/intel_gtt.h
+index 7da7681c20b1..7db9f3ac9aed 100644
+--- a/drivers/gpu/drm/i915/gt/intel_gtt.h
++++ b/drivers/gpu/drm/i915/gt/intel_gtt.h
+@@ -515,10 +515,6 @@ struct i915_ppgtt *i915_ppgtt_create(struct intel_gt *gt);
+ void i915_gem_suspend_gtt_mappings(struct drm_i915_private *i915);
+ void i915_gem_restore_gtt_mappings(struct drm_i915_private *i915);
+ 
+-u64 gen8_pte_encode(dma_addr_t addr,
+-		    enum i915_cache_level level,
+-		    u32 flags);
+-
+ int setup_page_dma(struct i915_address_space *vm, struct i915_page_dma *p);
+ void cleanup_page_dma(struct i915_address_space *vm, struct i915_page_dma *p);
+ 
+diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
+index d2a3d935d186..b2d245963d9f 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rps.c
++++ b/drivers/gpu/drm/i915/gt/intel_rps.c
+@@ -763,6 +763,19 @@ void intel_rps_park(struct intel_rps *rps)
+ 	intel_uncore_forcewake_get(rps_to_uncore(rps), FORCEWAKE_MEDIA);
+ 	rps_set(rps, rps->idle_freq, false);
+ 	intel_uncore_forcewake_put(rps_to_uncore(rps), FORCEWAKE_MEDIA);
++
++	/*
++	 * Since we will try and restart from the previously requested
++	 * frequency on unparking, treat this idle point as a downclock
++	 * interrupt and reduce the frequency for resume. If we park/unpark
++	 * more frequently than the rps worker can run, we will not respond
++	 * to any EI and never see a change in frequency.
++	 *
++	 * (Note we accommodate Cherryview's limitation of only using an
++	 * even bin by applying it to all.)
++	 */
++	rps->cur_freq =
++		max_t(int, round_down(rps->cur_freq - 1, 2), rps->min_freq);
+ }
+ 
+ void intel_rps_boost(struct i915_request *rq)
+diff --git a/drivers/gpu/drm/vboxvideo/vbox_drv.c b/drivers/gpu/drm/vboxvideo/vbox_drv.c
+index 8512d970a09f..ac8f75db2ecd 100644
+--- a/drivers/gpu/drm/vboxvideo/vbox_drv.c
++++ b/drivers/gpu/drm/vboxvideo/vbox_drv.c
+@@ -41,6 +41,10 @@ static int vbox_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	if (!vbox_check_supported(VBE_DISPI_ID_HGSMI))
+ 		return -ENODEV;
+ 
++	ret = drm_fb_helper_remove_conflicting_pci_framebuffers(pdev, "vboxvideodrmfb");
++	if (ret)
++		return ret;
++
+ 	vbox = kzalloc(sizeof(*vbox), GFP_KERNEL);
+ 	if (!vbox)
+ 		return -ENOMEM;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+index eea555617d4a..95ddd19d1aa7 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+@@ -464,9 +464,10 @@ st_lsm6dsx_shub_read_oneshot(struct st_lsm6dsx_sensor *sensor,
+ 
+ 	len = min_t(int, sizeof(data), ch->scan_type.realbits >> 3);
+ 	err = st_lsm6dsx_shub_read(sensor, ch->address, data, len);
++	if (err < 0)
++		return err;
+ 
+-	st_lsm6dsx_shub_set_enable(sensor, false);
+-
++	err = st_lsm6dsx_shub_set_enable(sensor, false);
+ 	if (err < 0)
+ 		return err;
+ 
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index dc974c288e88..08e919dbeb5d 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -530,6 +530,17 @@ static const struct dmi_system_id __initconst i8042_dmi_nomux_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_VERSION, "Lenovo LaVie Z"),
+ 		},
+ 	},
++	{
++		/*
++		 * Acer Aspire 5738z
++		 * Touchpad stops working in mux mode when dis- + re-enabled
++		 * with the touchpad enable/disable toggle hotkey
++		 */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 5738"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 83b1186ffcad..7c8f65c9c32d 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -2452,6 +2452,10 @@ static bool allocate_vpe_l2_table(int cpu, u32 id)
+ 	if (!gic_rdists->has_rvpeid)
+ 		return true;
+ 
++	/* Skip non-present CPUs */
++	if (!base)
++		return true;
++
+ 	val  = gicr_read_vpropbaser(base + SZ_128K + GICR_VPROPBASER);
+ 
+ 	esz  = FIELD_GET(GICR_VPROPBASER_4_1_ENTRY_SIZE, val) + 1;
+@@ -3675,12 +3679,18 @@ static int its_vpe_set_irqchip_state(struct irq_data *d,
+ 	return 0;
+ }
+ 
++static int its_vpe_retrigger(struct irq_data *d)
++{
++	return !its_vpe_set_irqchip_state(d, IRQCHIP_STATE_PENDING, true);
++}
++
+ static struct irq_chip its_vpe_irq_chip = {
+ 	.name			= "GICv4-vpe",
+ 	.irq_mask		= its_vpe_mask_irq,
+ 	.irq_unmask		= its_vpe_unmask_irq,
+ 	.irq_eoi		= irq_chip_eoi_parent,
+ 	.irq_set_affinity	= its_vpe_set_affinity,
++	.irq_retrigger		= its_vpe_retrigger,
+ 	.irq_set_irqchip_state	= its_vpe_set_irqchip_state,
+ 	.irq_set_vcpu_affinity	= its_vpe_set_vcpu_affinity,
+ };
+diff --git a/drivers/irqchip/irq-versatile-fpga.c b/drivers/irqchip/irq-versatile-fpga.c
+index 928858dada75..f1386733d3bc 100644
+--- a/drivers/irqchip/irq-versatile-fpga.c
++++ b/drivers/irqchip/irq-versatile-fpga.c
+@@ -6,6 +6,7 @@
+ #include <linux/irq.h>
+ #include <linux/io.h>
+ #include <linux/irqchip.h>
++#include <linux/irqchip/chained_irq.h>
+ #include <linux/irqchip/versatile-fpga.h>
+ #include <linux/irqdomain.h>
+ #include <linux/module.h>
+@@ -68,12 +69,16 @@ static void fpga_irq_unmask(struct irq_data *d)
+ 
+ static void fpga_irq_handle(struct irq_desc *desc)
+ {
++	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	struct fpga_irq_data *f = irq_desc_get_handler_data(desc);
+-	u32 status = readl(f->base + IRQ_STATUS);
++	u32 status;
++
++	chained_irq_enter(chip, desc);
+ 
++	status = readl(f->base + IRQ_STATUS);
+ 	if (status == 0) {
+ 		do_bad_IRQ(desc);
+-		return;
++		goto out;
+ 	}
+ 
+ 	do {
+@@ -82,6 +87,9 @@ static void fpga_irq_handle(struct irq_desc *desc)
+ 		status &= ~(1 << irq);
+ 		generic_handle_irq(irq_find_mapping(f->domain, irq));
+ 	} while (status);
++
++out:
++	chained_irq_exit(chip, desc);
+ }
+ 
+ /*
+@@ -204,6 +212,9 @@ int __init fpga_irq_of_init(struct device_node *node,
+ 	if (of_property_read_u32(node, "valid-mask", &valid_mask))
+ 		valid_mask = 0;
+ 
++	writel(clear_mask, base + IRQ_ENABLE_CLEAR);
++	writel(clear_mask, base + FIQ_ENABLE_CLEAR);
++
+ 	/* Some chips are cascaded from a parent IRQ */
+ 	parent_irq = irq_of_parse_and_map(node, 0);
+ 	if (!parent_irq) {
+@@ -213,9 +224,6 @@ int __init fpga_irq_of_init(struct device_node *node,
+ 
+ 	fpga_irq_init(base, node->name, 0, parent_irq, valid_mask, node);
+ 
+-	writel(clear_mask, base + IRQ_ENABLE_CLEAR);
+-	writel(clear_mask, base + FIQ_ENABLE_CLEAR);
+-
+ 	/*
+ 	 * On Versatile AB/PB, some secondary interrupts have a direct
+ 	 * pass-thru to the primary controller for IRQs 20 and 22-31 which need
+diff --git a/drivers/md/dm-clone-metadata.c b/drivers/md/dm-clone-metadata.c
+index c05b12110456..17712456fa63 100644
+--- a/drivers/md/dm-clone-metadata.c
++++ b/drivers/md/dm-clone-metadata.c
+@@ -656,7 +656,7 @@ bool dm_clone_is_range_hydrated(struct dm_clone_metadata *cmd,
+ 	return (bit >= (start + nr_regions));
+ }
+ 
+-unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd)
++unsigned int dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd)
+ {
+ 	return bitmap_weight(cmd->region_map, cmd->nr_regions);
+ }
+@@ -850,6 +850,12 @@ int dm_clone_set_region_hydrated(struct dm_clone_metadata *cmd, unsigned long re
+ 	struct dirty_map *dmap;
+ 	unsigned long word, flags;
+ 
++	if (unlikely(region_nr >= cmd->nr_regions)) {
++		DMERR("Region %lu out of range (total number of regions %lu)",
++		      region_nr, cmd->nr_regions);
++		return -ERANGE;
++	}
++
+ 	word = region_nr / BITS_PER_LONG;
+ 
+ 	spin_lock_irqsave(&cmd->bitmap_lock, flags);
+@@ -879,6 +885,13 @@ int dm_clone_cond_set_range(struct dm_clone_metadata *cmd, unsigned long start,
+ 	struct dirty_map *dmap;
+ 	unsigned long word, region_nr;
+ 
++	if (unlikely(start >= cmd->nr_regions || (start + nr_regions) < start ||
++		     (start + nr_regions) > cmd->nr_regions)) {
++		DMERR("Invalid region range: start %lu, nr_regions %lu (total number of regions %lu)",
++		      start, nr_regions, cmd->nr_regions);
++		return -ERANGE;
++	}
++
+ 	spin_lock_irq(&cmd->bitmap_lock);
+ 
+ 	if (cmd->read_only) {
+diff --git a/drivers/md/dm-clone-metadata.h b/drivers/md/dm-clone-metadata.h
+index 14af1ebd853f..d848b8799c07 100644
+--- a/drivers/md/dm-clone-metadata.h
++++ b/drivers/md/dm-clone-metadata.h
+@@ -156,7 +156,7 @@ bool dm_clone_is_range_hydrated(struct dm_clone_metadata *cmd,
+ /*
+  * Returns the number of hydrated regions.
+  */
+-unsigned long dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd);
++unsigned int dm_clone_nr_of_hydrated_regions(struct dm_clone_metadata *cmd);
+ 
+ /*
+  * Returns the first unhydrated region with region_nr >= @start
+diff --git a/drivers/md/dm-clone-target.c b/drivers/md/dm-clone-target.c
+index d1e1b5b56b1b..5ce96ddf1ce1 100644
+--- a/drivers/md/dm-clone-target.c
++++ b/drivers/md/dm-clone-target.c
+@@ -282,7 +282,7 @@ static bool bio_triggers_commit(struct clone *clone, struct bio *bio)
+ /* Get the address of the region in sectors */
+ static inline sector_t region_to_sector(struct clone *clone, unsigned long region_nr)
+ {
+-	return (region_nr << clone->region_shift);
++	return ((sector_t)region_nr << clone->region_shift);
+ }
+ 
+ /* Get the region number of the bio */
+@@ -293,10 +293,17 @@ static inline unsigned long bio_to_region(struct clone *clone, struct bio *bio)
+ 
+ /* Get the region range covered by the bio */
+ static void bio_region_range(struct clone *clone, struct bio *bio,
+-			     unsigned long *rs, unsigned long *re)
++			     unsigned long *rs, unsigned long *nr_regions)
+ {
++	unsigned long end;
++
+ 	*rs = dm_sector_div_up(bio->bi_iter.bi_sector, clone->region_size);
+-	*re = bio_end_sector(bio) >> clone->region_shift;
++	end = bio_end_sector(bio) >> clone->region_shift;
++
++	if (*rs >= end)
++		*nr_regions = 0;
++	else
++		*nr_regions = end - *rs;
+ }
+ 
+ /* Check whether a bio overwrites a region */
+@@ -454,7 +461,7 @@ static void trim_bio(struct bio *bio, sector_t sector, unsigned int len)
+ 
+ static void complete_discard_bio(struct clone *clone, struct bio *bio, bool success)
+ {
+-	unsigned long rs, re;
++	unsigned long rs, nr_regions;
+ 
+ 	/*
+ 	 * If the destination device supports discards, remap and trim the
+@@ -463,9 +470,9 @@ static void complete_discard_bio(struct clone *clone, struct bio *bio, bool succ
+ 	 */
+ 	if (test_bit(DM_CLONE_DISCARD_PASSDOWN, &clone->flags) && success) {
+ 		remap_to_dest(clone, bio);
+-		bio_region_range(clone, bio, &rs, &re);
+-		trim_bio(bio, rs << clone->region_shift,
+-			 (re - rs) << clone->region_shift);
++		bio_region_range(clone, bio, &rs, &nr_regions);
++		trim_bio(bio, region_to_sector(clone, rs),
++			 nr_regions << clone->region_shift);
+ 		generic_make_request(bio);
+ 	} else
+ 		bio_endio(bio);
+@@ -473,12 +480,21 @@ static void complete_discard_bio(struct clone *clone, struct bio *bio, bool succ
+ 
+ static void process_discard_bio(struct clone *clone, struct bio *bio)
+ {
+-	unsigned long rs, re;
++	unsigned long rs, nr_regions;
+ 
+-	bio_region_range(clone, bio, &rs, &re);
+-	BUG_ON(re > clone->nr_regions);
++	bio_region_range(clone, bio, &rs, &nr_regions);
++	if (!nr_regions) {
++		bio_endio(bio);
++		return;
++	}
+ 
+-	if (unlikely(rs == re)) {
++	if (WARN_ON(rs >= clone->nr_regions || (rs + nr_regions) < rs ||
++		    (rs + nr_regions) > clone->nr_regions)) {
++		DMERR("%s: Invalid range (%lu + %lu, total regions %lu) for discard (%llu + %u)",
++		      clone_device_name(clone), rs, nr_regions,
++		      clone->nr_regions,
++		      (unsigned long long)bio->bi_iter.bi_sector,
++		      bio_sectors(bio));
+ 		bio_endio(bio);
+ 		return;
+ 	}
+@@ -487,7 +503,7 @@ static void process_discard_bio(struct clone *clone, struct bio *bio)
+ 	 * The covered regions are already hydrated so we just need to pass
+ 	 * down the discard.
+ 	 */
+-	if (dm_clone_is_range_hydrated(clone->cmd, rs, re - rs)) {
++	if (dm_clone_is_range_hydrated(clone->cmd, rs, nr_regions)) {
+ 		complete_discard_bio(clone, bio, true);
+ 		return;
+ 	}
+@@ -788,11 +804,14 @@ static void hydration_copy(struct dm_clone_region_hydration *hd, unsigned int nr
+ 	struct dm_io_region from, to;
+ 	struct clone *clone = hd->clone;
+ 
++	if (WARN_ON(!nr_regions))
++		return;
++
+ 	region_size = clone->region_size;
+ 	region_start = hd->region_nr;
+ 	region_end = region_start + nr_regions - 1;
+ 
+-	total_size = (nr_regions - 1) << clone->region_shift;
++	total_size = region_to_sector(clone, nr_regions - 1);
+ 
+ 	if (region_end == clone->nr_regions - 1) {
+ 		/*
+@@ -1169,7 +1188,7 @@ static void process_deferred_discards(struct clone *clone)
+ 	int r = -EPERM;
+ 	struct bio *bio;
+ 	struct blk_plug plug;
+-	unsigned long rs, re;
++	unsigned long rs, nr_regions;
+ 	struct bio_list discards = BIO_EMPTY_LIST;
+ 
+ 	spin_lock_irq(&clone->lock);
+@@ -1185,14 +1204,13 @@ static void process_deferred_discards(struct clone *clone)
+ 
+ 	/* Update the metadata */
+ 	bio_list_for_each(bio, &discards) {
+-		bio_region_range(clone, bio, &rs, &re);
++		bio_region_range(clone, bio, &rs, &nr_regions);
+ 		/*
+ 		 * A discard request might cover regions that have been already
+ 		 * hydrated. There is no need to update the metadata for these
+ 		 * regions.
+ 		 */
+-		r = dm_clone_cond_set_range(clone->cmd, rs, re - rs);
+-
++		r = dm_clone_cond_set_range(clone->cmd, rs, nr_regions);
+ 		if (unlikely(r))
+ 			break;
+ 	}
+@@ -1455,7 +1473,7 @@ static void clone_status(struct dm_target *ti, status_type_t type,
+ 			goto error;
+ 		}
+ 
+-		DMEMIT("%u %llu/%llu %llu %lu/%lu %u ",
++		DMEMIT("%u %llu/%llu %llu %u/%lu %u ",
+ 		       DM_CLONE_METADATA_BLOCK_SIZE,
+ 		       (unsigned long long)(nr_metadata_blocks - nr_free_metadata_blocks),
+ 		       (unsigned long long)nr_metadata_blocks,
+@@ -1775,6 +1793,7 @@ error:
+ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ {
+ 	int r;
++	sector_t nr_regions;
+ 	struct clone *clone;
+ 	struct dm_arg_set as;
+ 
+@@ -1816,7 +1835,16 @@ static int clone_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+ 		goto out_with_source_dev;
+ 
+ 	clone->region_shift = __ffs(clone->region_size);
+-	clone->nr_regions = dm_sector_div_up(ti->len, clone->region_size);
++	nr_regions = dm_sector_div_up(ti->len, clone->region_size);
++
++	/* Check for overflow */
++	if (nr_regions != (unsigned long)nr_regions) {
++		ti->error = "Too many regions. Consider increasing the region size";
++		r = -EOVERFLOW;
++		goto out_with_source_dev;
++	}
++
++	clone->nr_regions = nr_regions;
+ 
+ 	r = validate_nr_regions(clone->nr_regions, &ti->error);
+ 	if (r)
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 2f03fecd312d..fc0c5f4f6b70 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -1519,7 +1519,7 @@ static void integrity_metadata(struct work_struct *w)
+ 		struct bio *bio = dm_bio_from_per_bio_data(dio, sizeof(struct dm_integrity_io));
+ 		char *checksums;
+ 		unsigned extra_space = unlikely(digest_size > ic->tag_size) ? digest_size - ic->tag_size : 0;
+-		char checksums_onstack[HASH_MAX_DIGESTSIZE];
++		char checksums_onstack[max((size_t)HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
+ 		unsigned sectors_to_process = dio->range.n_sectors;
+ 		sector_t sector = dio->range.logical_sector;
+ 
+@@ -1748,7 +1748,7 @@ retry_kmap:
+ 				} while (++s < ic->sectors_per_block);
+ #ifdef INTERNAL_VERIFY
+ 				if (ic->internal_hash) {
+-					char checksums_onstack[max(HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
++					char checksums_onstack[max((size_t)HASH_MAX_DIGESTSIZE, MAX_TAG_SIZE)];
+ 
+ 					integrity_sector_checksum(ic, logical_sector, mem + bv.bv_offset, checksums_onstack);
+ 					if (unlikely(memcmp(checksums_onstack, journal_entry_tag(ic, je), ic->tag_size))) {
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 3ceeb6b404ed..49147e634046 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -551,6 +551,7 @@ void verity_fec_dtr(struct dm_verity *v)
+ 	mempool_exit(&f->rs_pool);
+ 	mempool_exit(&f->prealloc_pool);
+ 	mempool_exit(&f->extra_pool);
++	mempool_exit(&f->output_pool);
+ 	kmem_cache_destroy(f->cache);
+ 
+ 	if (f->data_bufio)
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index a09bdc000e64..d3b17a654917 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -876,6 +876,7 @@ static int writecache_alloc_entries(struct dm_writecache *wc)
+ 		struct wc_entry *e = &wc->entries[b];
+ 		e->index = b;
+ 		e->write_in_progress = false;
++		cond_resched();
+ 	}
+ 
+ 	return 0;
+@@ -930,6 +931,7 @@ static void writecache_resume(struct dm_target *ti)
+ 			e->original_sector = le64_to_cpu(wme.original_sector);
+ 			e->seq_count = le64_to_cpu(wme.seq_count);
+ 		}
++		cond_resched();
+ 	}
+ #endif
+ 	for (b = 0; b < wc->n_blocks; b++) {
+@@ -1791,8 +1793,10 @@ static int init_memory(struct dm_writecache *wc)
+ 	pmem_assign(sb(wc)->n_blocks, cpu_to_le64(wc->n_blocks));
+ 	pmem_assign(sb(wc)->seq_count, cpu_to_le64(0));
+ 
+-	for (b = 0; b < wc->n_blocks; b++)
++	for (b = 0; b < wc->n_blocks; b++) {
+ 		write_original_sector_seq_count(wc, &wc->entries[b], -1, -1);
++		cond_resched();
++	}
+ 
+ 	writecache_flush_all_metadata(wc);
+ 	writecache_commit_flushed(wc, false);
+diff --git a/drivers/md/dm-zoned-metadata.c b/drivers/md/dm-zoned-metadata.c
+index 516c7b671d25..369de15c4e80 100644
+--- a/drivers/md/dm-zoned-metadata.c
++++ b/drivers/md/dm-zoned-metadata.c
+@@ -1109,7 +1109,6 @@ static int dmz_init_zone(struct blk_zone *blkz, unsigned int idx, void *data)
+ 	switch (blkz->type) {
+ 	case BLK_ZONE_TYPE_CONVENTIONAL:
+ 		set_bit(DMZ_RND, &zone->flags);
+-		zmd->nr_rnd_zones++;
+ 		break;
+ 	case BLK_ZONE_TYPE_SEQWRITE_REQ:
+ 	case BLK_ZONE_TYPE_SEQWRITE_PREF:
+diff --git a/drivers/md/md.c b/drivers/md/md.c
+index 469f551863be..0b30ada971c1 100644
+--- a/drivers/md/md.c
++++ b/drivers/md/md.c
+@@ -6184,7 +6184,7 @@ EXPORT_SYMBOL_GPL(md_stop_writes);
+ static void mddev_detach(struct mddev *mddev)
+ {
+ 	md_bitmap_wait_behind_writes(mddev);
+-	if (mddev->pers && mddev->pers->quiesce) {
++	if (mddev->pers && mddev->pers->quiesce && !mddev->suspended) {
+ 		mddev->pers->quiesce(mddev, 1);
+ 		mddev->pers->quiesce(mddev, 0);
+ 	}
+diff --git a/drivers/media/i2c/ov5695.c b/drivers/media/i2c/ov5695.c
+index d6cd15bb699a..cc678d9d2e0d 100644
+--- a/drivers/media/i2c/ov5695.c
++++ b/drivers/media/i2c/ov5695.c
+@@ -971,16 +971,9 @@ unlock_and_return:
+ 	return ret;
+ }
+ 
+-/* Calculate the delay in us by clock rate and clock cycles */
+-static inline u32 ov5695_cal_delay(u32 cycles)
+-{
+-	return DIV_ROUND_UP(cycles, OV5695_XVCLK_FREQ / 1000 / 1000);
+-}
+-
+ static int __ov5695_power_on(struct ov5695 *ov5695)
+ {
+-	int ret;
+-	u32 delay_us;
++	int i, ret;
+ 	struct device *dev = &ov5695->client->dev;
+ 
+ 	ret = clk_prepare_enable(ov5695->xvclk);
+@@ -991,21 +984,28 @@ static int __ov5695_power_on(struct ov5695 *ov5695)
+ 
+ 	gpiod_set_value_cansleep(ov5695->reset_gpio, 1);
+ 
+-	ret = regulator_bulk_enable(OV5695_NUM_SUPPLIES, ov5695->supplies);
+-	if (ret < 0) {
+-		dev_err(dev, "Failed to enable regulators\n");
+-		goto disable_clk;
++	/*
++	 * The hardware requires the regulators to be powered on in order,
++	 * so enable them one by one.
++	 */
++	for (i = 0; i < OV5695_NUM_SUPPLIES; i++) {
++		ret = regulator_enable(ov5695->supplies[i].consumer);
++		if (ret) {
++			dev_err(dev, "Failed to enable %s: %d\n",
++				ov5695->supplies[i].supply, ret);
++			goto disable_reg_clk;
++		}
+ 	}
+ 
+ 	gpiod_set_value_cansleep(ov5695->reset_gpio, 0);
+ 
+-	/* 8192 cycles prior to first SCCB transaction */
+-	delay_us = ov5695_cal_delay(8192);
+-	usleep_range(delay_us, delay_us * 2);
++	usleep_range(1000, 1200);
+ 
+ 	return 0;
+ 
+-disable_clk:
++disable_reg_clk:
++	for (--i; i >= 0; i--)
++		regulator_disable(ov5695->supplies[i].consumer);
+ 	clk_disable_unprepare(ov5695->xvclk);
+ 
+ 	return ret;
+@@ -1013,9 +1013,22 @@ disable_clk:
+ 
+ static void __ov5695_power_off(struct ov5695 *ov5695)
+ {
++	struct device *dev = &ov5695->client->dev;
++	int i, ret;
++
+ 	clk_disable_unprepare(ov5695->xvclk);
+ 	gpiod_set_value_cansleep(ov5695->reset_gpio, 1);
+-	regulator_bulk_disable(OV5695_NUM_SUPPLIES, ov5695->supplies);
++
++	/*
++	 * The hardware requires the regulators to be powered off in order,
++	 * so disable them one by one.
++	 */
++	for (i = OV5695_NUM_SUPPLIES - 1; i >= 0; i--) {
++		ret = regulator_disable(ov5695->supplies[i].consumer);
++		if (ret)
++			dev_err(dev, "Failed to disable %s: %d\n",
++				ov5695->supplies[i].supply, ret);
++	}
+ }
+ 
+ static int __maybe_unused ov5695_runtime_resume(struct device *dev)
+@@ -1285,7 +1298,7 @@ static int ov5695_probe(struct i2c_client *client,
+ 	if (clk_get_rate(ov5695->xvclk) != OV5695_XVCLK_FREQ)
+ 		dev_warn(dev, "xvclk mismatched, modes are based on 24MHz\n");
+ 
+-	ov5695->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
++	ov5695->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH);
+ 	if (IS_ERR(ov5695->reset_gpio)) {
+ 		dev_err(dev, "Failed to get reset-gpios\n");
+ 		return -EINVAL;
+diff --git a/drivers/media/i2c/video-i2c.c b/drivers/media/i2c/video-i2c.c
+index 078141712c88..0b977e73ceb2 100644
+--- a/drivers/media/i2c/video-i2c.c
++++ b/drivers/media/i2c/video-i2c.c
+@@ -255,7 +255,7 @@ static int amg88xx_set_power(struct video_i2c_data *data, bool on)
+ 	return amg88xx_set_power_off(data);
+ }
+ 
+-#if IS_ENABLED(CONFIG_HWMON)
++#if IS_REACHABLE(CONFIG_HWMON)
+ 
+ static const u32 amg88xx_temp_config[] = {
+ 	HWMON_T_INPUT,
+diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c b/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c
+index 6720d11f50cf..b065ccd06914 100644
+--- a/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c
++++ b/drivers/media/platform/mtk-mdp/mtk_mdp_vpu.c
+@@ -15,7 +15,7 @@ static inline struct mtk_mdp_ctx *vpu_to_ctx(struct mtk_mdp_vpu *vpu)
+ 	return container_of(vpu, struct mtk_mdp_ctx, vpu);
+ }
+ 
+-static void mtk_mdp_vpu_handle_init_ack(struct mdp_ipi_comm_ack *msg)
++static void mtk_mdp_vpu_handle_init_ack(const struct mdp_ipi_comm_ack *msg)
+ {
+ 	struct mtk_mdp_vpu *vpu = (struct mtk_mdp_vpu *)
+ 					(unsigned long)msg->ap_inst;
+@@ -26,10 +26,11 @@ static void mtk_mdp_vpu_handle_init_ack(struct mdp_ipi_comm_ack *msg)
+ 	vpu->inst_addr = msg->vpu_inst_addr;
+ }
+ 
+-static void mtk_mdp_vpu_ipi_handler(void *data, unsigned int len, void *priv)
++static void mtk_mdp_vpu_ipi_handler(const void *data, unsigned int len,
++				    void *priv)
+ {
+-	unsigned int msg_id = *(unsigned int *)data;
+-	struct mdp_ipi_comm_ack *msg = (struct mdp_ipi_comm_ack *)data;
++	const struct mdp_ipi_comm_ack *msg = data;
++	unsigned int msg_id = msg->msg_id;
+ 	struct mtk_mdp_vpu *vpu = (struct mtk_mdp_vpu *)
+ 					(unsigned long)msg->ap_inst;
+ 	struct mtk_mdp_ctx *ctx;
+diff --git a/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c b/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c
+index 70abfd4cd4b9..948a12fd9d46 100644
+--- a/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c
++++ b/drivers/media/platform/mtk-vcodec/vdec_vpu_if.c
+@@ -9,7 +9,7 @@
+ #include "vdec_ipi_msg.h"
+ #include "vdec_vpu_if.h"
+ 
+-static void handle_init_ack_msg(struct vdec_vpu_ipi_init_ack *msg)
++static void handle_init_ack_msg(const struct vdec_vpu_ipi_init_ack *msg)
+ {
+ 	struct vdec_vpu_inst *vpu = (struct vdec_vpu_inst *)
+ 					(unsigned long)msg->ap_inst_addr;
+@@ -34,9 +34,9 @@ static void handle_init_ack_msg(struct vdec_vpu_ipi_init_ack *msg)
+  * This function runs in interrupt context and it means there's an IPI MSG
+  * from VPU.
+  */
+-static void vpu_dec_ipi_handler(void *data, unsigned int len, void *priv)
++static void vpu_dec_ipi_handler(const void *data, unsigned int len, void *priv)
+ {
+-	struct vdec_vpu_ipi_ack *msg = data;
++	const struct vdec_vpu_ipi_ack *msg = data;
+ 	struct vdec_vpu_inst *vpu = (struct vdec_vpu_inst *)
+ 					(unsigned long)msg->ap_inst_addr;
+ 
+diff --git a/drivers/media/platform/mtk-vcodec/venc_vpu_if.c b/drivers/media/platform/mtk-vcodec/venc_vpu_if.c
+index 3e931b0ed096..9540709c1905 100644
+--- a/drivers/media/platform/mtk-vcodec/venc_vpu_if.c
++++ b/drivers/media/platform/mtk-vcodec/venc_vpu_if.c
+@@ -8,26 +8,26 @@
+ #include "venc_ipi_msg.h"
+ #include "venc_vpu_if.h"
+ 
+-static void handle_enc_init_msg(struct venc_vpu_inst *vpu, void *data)
++static void handle_enc_init_msg(struct venc_vpu_inst *vpu, const void *data)
+ {
+-	struct venc_vpu_ipi_msg_init *msg = data;
++	const struct venc_vpu_ipi_msg_init *msg = data;
+ 
+ 	vpu->inst_addr = msg->vpu_inst_addr;
+ 	vpu->vsi = vpu_mapping_dm_addr(vpu->dev, msg->vpu_inst_addr);
+ }
+ 
+-static void handle_enc_encode_msg(struct venc_vpu_inst *vpu, void *data)
++static void handle_enc_encode_msg(struct venc_vpu_inst *vpu, const void *data)
+ {
+-	struct venc_vpu_ipi_msg_enc *msg = data;
++	const struct venc_vpu_ipi_msg_enc *msg = data;
+ 
+ 	vpu->state = msg->state;
+ 	vpu->bs_size = msg->bs_size;
+ 	vpu->is_key_frm = msg->is_key_frm;
+ }
+ 
+-static void vpu_enc_ipi_handler(void *data, unsigned int len, void *priv)
++static void vpu_enc_ipi_handler(const void *data, unsigned int len, void *priv)
+ {
+-	struct venc_vpu_ipi_msg_common *msg = data;
++	const struct venc_vpu_ipi_msg_common *msg = data;
+ 	struct venc_vpu_inst *vpu =
+ 		(struct venc_vpu_inst *)(unsigned long)msg->venc_inst;
+ 
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.c b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+index a768707abb94..2fbccc9b247b 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.c
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.c
+@@ -203,8 +203,8 @@ struct mtk_vpu {
+ 	struct vpu_run run;
+ 	struct vpu_wdt wdt;
+ 	struct vpu_ipi_desc ipi_desc[IPI_MAX];
+-	struct share_obj *recv_buf;
+-	struct share_obj *send_buf;
++	struct share_obj __iomem *recv_buf;
++	struct share_obj __iomem *send_buf;
+ 	struct device *dev;
+ 	struct clk *clk;
+ 	bool fw_loaded;
+@@ -292,7 +292,7 @@ int vpu_ipi_send(struct platform_device *pdev,
+ 		 unsigned int len)
+ {
+ 	struct mtk_vpu *vpu = platform_get_drvdata(pdev);
+-	struct share_obj *send_obj = vpu->send_buf;
++	struct share_obj __iomem *send_obj = vpu->send_buf;
+ 	unsigned long timeout;
+ 	int ret = 0;
+ 
+@@ -325,9 +325,9 @@ int vpu_ipi_send(struct platform_device *pdev,
+ 		}
+ 	} while (vpu_cfg_readl(vpu, HOST_TO_VPU));
+ 
+-	memcpy((void *)send_obj->share_buf, buf, len);
+-	send_obj->len = len;
+-	send_obj->id = id;
++	memcpy_toio(send_obj->share_buf, buf, len);
++	writel(len, &send_obj->len);
++	writel(id, &send_obj->id);
+ 
+ 	vpu->ipi_id_ack[id] = false;
+ 	/* send the command to VPU */
+@@ -600,10 +600,10 @@ OUT_LOAD_FW:
+ }
+ EXPORT_SYMBOL_GPL(vpu_load_firmware);
+ 
+-static void vpu_init_ipi_handler(void *data, unsigned int len, void *priv)
++static void vpu_init_ipi_handler(const void *data, unsigned int len, void *priv)
+ {
+-	struct mtk_vpu *vpu = (struct mtk_vpu *)priv;
+-	struct vpu_run *run = (struct vpu_run *)data;
++	struct mtk_vpu *vpu = priv;
++	const struct vpu_run *run = data;
+ 
+ 	vpu->run.signaled = run->signaled;
+ 	strscpy(vpu->run.fw_ver, run->fw_ver, sizeof(vpu->run.fw_ver));
+@@ -700,19 +700,21 @@ static int vpu_alloc_ext_mem(struct mtk_vpu *vpu, u32 fw_type)
+ 
+ static void vpu_ipi_handler(struct mtk_vpu *vpu)
+ {
+-	struct share_obj *rcv_obj = vpu->recv_buf;
++	struct share_obj __iomem *rcv_obj = vpu->recv_buf;
+ 	struct vpu_ipi_desc *ipi_desc = vpu->ipi_desc;
+-
+-	if (rcv_obj->id < IPI_MAX && ipi_desc[rcv_obj->id].handler) {
+-		ipi_desc[rcv_obj->id].handler(rcv_obj->share_buf,
+-					      rcv_obj->len,
+-					      ipi_desc[rcv_obj->id].priv);
+-		if (rcv_obj->id > IPI_VPU_INIT) {
+-			vpu->ipi_id_ack[rcv_obj->id] = true;
++	unsigned char data[SHARE_BUF_SIZE];
++	s32 id = readl(&rcv_obj->id);
++
++	memcpy_fromio(data, rcv_obj->share_buf, sizeof(data));
++	if (id < IPI_MAX && ipi_desc[id].handler) {
++		ipi_desc[id].handler(data, readl(&rcv_obj->len),
++				     ipi_desc[id].priv);
++		if (id > IPI_VPU_INIT) {
++			vpu->ipi_id_ack[id] = true;
+ 			wake_up(&vpu->ack_wq);
+ 		}
+ 	} else {
+-		dev_err(vpu->dev, "No such ipi id = %d\n", rcv_obj->id);
++		dev_err(vpu->dev, "No such ipi id = %d\n", id);
+ 	}
+ }
+ 
+@@ -722,11 +724,10 @@ static int vpu_ipi_init(struct mtk_vpu *vpu)
+ 	vpu_cfg_writel(vpu, 0x0, VPU_TO_HOST);
+ 
+ 	/* shared buffer initialization */
+-	vpu->recv_buf = (__force struct share_obj *)(vpu->reg.tcm +
+-						     VPU_DTCM_OFFSET);
++	vpu->recv_buf = vpu->reg.tcm + VPU_DTCM_OFFSET;
+ 	vpu->send_buf = vpu->recv_buf + 1;
+-	memset(vpu->recv_buf, 0, sizeof(struct share_obj));
+-	memset(vpu->send_buf, 0, sizeof(struct share_obj));
++	memset_io(vpu->recv_buf, 0, sizeof(struct share_obj));
++	memset_io(vpu->send_buf, 0, sizeof(struct share_obj));
+ 
+ 	return 0;
+ }
+diff --git a/drivers/media/platform/mtk-vpu/mtk_vpu.h b/drivers/media/platform/mtk-vpu/mtk_vpu.h
+index d4453b4bcee9..ee7c552ce928 100644
+--- a/drivers/media/platform/mtk-vpu/mtk_vpu.h
++++ b/drivers/media/platform/mtk-vpu/mtk_vpu.h
+@@ -15,7 +15,7 @@
+  * VPU interfaces with other blocks by share memory and interrupt.
+  **/
+ 
+-typedef void (*ipi_handler_t) (void *data,
++typedef void (*ipi_handler_t) (const void *data,
+ 			       unsigned int len,
+ 			       void *priv);
+ 
+diff --git a/drivers/media/platform/qcom/venus/core.h b/drivers/media/platform/qcom/venus/core.h
+index 11585fb3cae3..2f661af7f873 100644
+--- a/drivers/media/platform/qcom/venus/core.h
++++ b/drivers/media/platform/qcom/venus/core.h
+@@ -344,6 +344,7 @@ struct venus_inst {
+ 	unsigned int subscriptions;
+ 	int buf_count;
+ 	struct venus_ts_metadata tss[VIDEO_MAX_FRAME];
++	unsigned long payloads[VIDEO_MAX_FRAME];
+ 	u64 fps;
+ 	struct v4l2_fract timeperframe;
+ 	const struct venus_format *fmt_out;
+diff --git a/drivers/media/platform/qcom/venus/firmware.c b/drivers/media/platform/qcom/venus/firmware.c
+index d3d1748a7ef6..33f70e1def94 100644
+--- a/drivers/media/platform/qcom/venus/firmware.c
++++ b/drivers/media/platform/qcom/venus/firmware.c
+@@ -44,8 +44,14 @@ static void venus_reset_cpu(struct venus_core *core)
+ 
+ int venus_set_hw_state(struct venus_core *core, bool resume)
+ {
+-	if (core->use_tz)
+-		return qcom_scm_set_remote_state(resume, 0);
++	int ret;
++
++	if (core->use_tz) {
++		ret = qcom_scm_set_remote_state(resume, 0);
++		if (resume && ret == -EINVAL)
++			ret = 0;
++		return ret;
++	}
+ 
+ 	if (resume)
+ 		venus_reset_cpu(core);
+diff --git a/drivers/media/platform/qcom/venus/helpers.c b/drivers/media/platform/qcom/venus/helpers.c
+index a172f1ac0b35..32f8fb8d7f33 100644
+--- a/drivers/media/platform/qcom/venus/helpers.c
++++ b/drivers/media/platform/qcom/venus/helpers.c
+@@ -544,18 +544,13 @@ static int scale_clocks_v4(struct venus_inst *inst)
+ 	struct venus_core *core = inst->core;
+ 	const struct freq_tbl *table = core->res->freq_tbl;
+ 	unsigned int num_rows = core->res->freq_tbl_size;
+-	struct v4l2_m2m_ctx *m2m_ctx = inst->m2m_ctx;
+ 	struct device *dev = core->dev;
+ 	unsigned long freq = 0, freq_core1 = 0, freq_core2 = 0;
+ 	unsigned long filled_len = 0;
+-	struct venus_buffer *buf, *n;
+-	struct vb2_buffer *vb;
+ 	int i, ret;
+ 
+-	v4l2_m2m_for_each_src_buf_safe(m2m_ctx, buf, n) {
+-		vb = &buf->vb.vb2_buf;
+-		filled_len = max(filled_len, vb2_get_plane_payload(vb, 0));
+-	}
++	for (i = 0; i < inst->num_input_bufs; i++)
++		filled_len = max(filled_len, inst->payloads[i]);
+ 
+ 	if (inst->session_type == VIDC_SESSION_TYPE_DEC && !filled_len)
+ 		return 0;
+@@ -1289,6 +1284,15 @@ int venus_helper_vb2_buf_prepare(struct vb2_buffer *vb)
+ }
+ EXPORT_SYMBOL_GPL(venus_helper_vb2_buf_prepare);
+ 
++static void cache_payload(struct venus_inst *inst, struct vb2_buffer *vb)
++{
++	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
++	unsigned int idx = vbuf->vb2_buf.index;
++
++	if (vbuf->vb2_buf.type == V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE)
++		inst->payloads[idx] = vb2_get_plane_payload(vb, 0);
++}
++
+ void venus_helper_vb2_buf_queue(struct vb2_buffer *vb)
+ {
+ 	struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+@@ -1300,6 +1304,8 @@ void venus_helper_vb2_buf_queue(struct vb2_buffer *vb)
+ 
+ 	v4l2_m2m_buf_queue(m2m_ctx, vbuf);
+ 
++	cache_payload(inst, vb);
++
+ 	if (inst->session_type == VIDC_SESSION_TYPE_ENC &&
+ 	    !(inst->streamon_out && inst->streamon_cap))
+ 		goto unlock;
+diff --git a/drivers/media/platform/qcom/venus/hfi_parser.c b/drivers/media/platform/qcom/venus/hfi_parser.c
+index 2293d936e49c..7f515a4b9bd1 100644
+--- a/drivers/media/platform/qcom/venus/hfi_parser.c
++++ b/drivers/media/platform/qcom/venus/hfi_parser.c
+@@ -181,6 +181,7 @@ static void parse_codecs(struct venus_core *core, void *data)
+ 	if (IS_V1(core)) {
+ 		core->dec_codecs &= ~HFI_VIDEO_CODEC_HEVC;
+ 		core->dec_codecs &= ~HFI_VIDEO_CODEC_SPARK;
++		core->enc_codecs &= ~HFI_VIDEO_CODEC_HEVC;
+ 	}
+ }
+ 
+diff --git a/drivers/media/platform/ti-vpe/cal.c b/drivers/media/platform/ti-vpe/cal.c
+index be54806180a5..3d3535ff5c5a 100644
+--- a/drivers/media/platform/ti-vpe/cal.c
++++ b/drivers/media/platform/ti-vpe/cal.c
+@@ -372,8 +372,6 @@ struct cal_ctx {
+ 	struct v4l2_subdev	*sensor;
+ 	struct v4l2_fwnode_endpoint	endpoint;
+ 
+-	struct v4l2_async_subdev asd;
+-
+ 	struct v4l2_fh		fh;
+ 	struct cal_dev		*dev;
+ 	struct cc_data		*cc;
+@@ -722,16 +720,16 @@ static void enable_irqs(struct cal_ctx *ctx)
+ 
+ static void disable_irqs(struct cal_ctx *ctx)
+ {
++	u32 val;
++
+ 	/* Disable IRQ_WDMA_END 0/1 */
+-	reg_write_field(ctx->dev,
+-			CAL_HL_IRQENABLE_CLR(2),
+-			CAL_HL_IRQ_CLEAR,
+-			CAL_HL_IRQ_MASK(ctx->csi2_port));
++	val = 0;
++	set_field(&val, CAL_HL_IRQ_CLEAR, CAL_HL_IRQ_MASK(ctx->csi2_port));
++	reg_write(ctx->dev, CAL_HL_IRQENABLE_CLR(2), val);
+ 	/* Disable IRQ_WDMA_START 0/1 */
+-	reg_write_field(ctx->dev,
+-			CAL_HL_IRQENABLE_CLR(3),
+-			CAL_HL_IRQ_CLEAR,
+-			CAL_HL_IRQ_MASK(ctx->csi2_port));
++	val = 0;
++	set_field(&val, CAL_HL_IRQ_CLEAR, CAL_HL_IRQ_MASK(ctx->csi2_port));
++	reg_write(ctx->dev, CAL_HL_IRQENABLE_CLR(3), val);
+ 	/* Todo: Add VC_IRQ and CSI2_COMPLEXIO_IRQ handling */
+ 	reg_write(ctx->dev, CAL_CSI2_VC_IRQENABLE(1), 0);
+ }
+@@ -2032,7 +2030,6 @@ static int of_cal_create_instance(struct cal_ctx *ctx, int inst)
+ 
+ 	parent = pdev->dev.of_node;
+ 
+-	asd = &ctx->asd;
+ 	endpoint = &ctx->endpoint;
+ 
+ 	ep_node = NULL;
+@@ -2079,8 +2076,6 @@ static int of_cal_create_instance(struct cal_ctx *ctx, int inst)
+ 		ctx_dbg(3, ctx, "can't get remote parent\n");
+ 		goto cleanup_exit;
+ 	}
+-	asd->match_type = V4L2_ASYNC_MATCH_FWNODE;
+-	asd->match.fwnode = of_fwnode_handle(sensor_node);
+ 
+ 	v4l2_fwnode_endpoint_parse(of_fwnode_handle(ep_node), endpoint);
+ 
+@@ -2110,9 +2105,17 @@ static int of_cal_create_instance(struct cal_ctx *ctx, int inst)
+ 
+ 	v4l2_async_notifier_init(&ctx->notifier);
+ 
++	asd = kzalloc(sizeof(*asd), GFP_KERNEL);
++	if (!asd)
++		goto cleanup_exit;
++
++	asd->match_type = V4L2_ASYNC_MATCH_FWNODE;
++	asd->match.fwnode = of_fwnode_handle(sensor_node);
++
+ 	ret = v4l2_async_notifier_add_subdev(&ctx->notifier, asd);
+ 	if (ret) {
+ 		ctx_err(ctx, "Error adding asd\n");
++		kfree(asd);
+ 		goto cleanup_exit;
+ 	}
+ 
+diff --git a/drivers/media/platform/vimc/vimc-streamer.c b/drivers/media/platform/vimc/vimc-streamer.c
+index cd6b55433c9e..43e494df61d8 100644
+--- a/drivers/media/platform/vimc/vimc-streamer.c
++++ b/drivers/media/platform/vimc/vimc-streamer.c
+@@ -207,8 +207,13 @@ int vimc_streamer_s_stream(struct vimc_stream *stream,
+ 		stream->kthread = kthread_run(vimc_streamer_thread, stream,
+ 					      "vimc-streamer thread");
+ 
+-		if (IS_ERR(stream->kthread))
+-			return PTR_ERR(stream->kthread);
++		if (IS_ERR(stream->kthread)) {
++			ret = PTR_ERR(stream->kthread);
++			dev_err(ved->dev, "kthread_run failed with %d\n", ret);
++			vimc_streamer_pipeline_terminate(stream);
++			stream->kthread = NULL;
++			return ret;
++		}
+ 
+ 	} else {
+ 		if (!stream->kthread)
+diff --git a/drivers/media/rc/keymaps/Makefile b/drivers/media/rc/keymaps/Makefile
+index 63261ef6380a..aaa1bf81d00d 100644
+--- a/drivers/media/rc/keymaps/Makefile
++++ b/drivers/media/rc/keymaps/Makefile
+@@ -119,6 +119,7 @@ obj-$(CONFIG_RC_MAP) += rc-adstech-dvb-t-pci.o \
+ 			rc-videomate-m1f.o \
+ 			rc-videomate-s350.o \
+ 			rc-videomate-tv-pvr.o \
++			rc-videostrong-kii-pro.o \
+ 			rc-wetek-hub.o \
+ 			rc-wetek-play2.o \
+ 			rc-winfast.o \
+diff --git a/drivers/media/rc/keymaps/rc-videostrong-kii-pro.c b/drivers/media/rc/keymaps/rc-videostrong-kii-pro.c
+new file mode 100644
+index 000000000000..414d4d231e7e
+--- /dev/null
++++ b/drivers/media/rc/keymaps/rc-videostrong-kii-pro.c
+@@ -0,0 +1,83 @@
++// SPDX-License-Identifier: GPL-2.0+
++//
++// Copyright (C) 2019 Mohammad Rasim <mohammad.rasim96@gmail.com>
++
++#include <media/rc-map.h>
++#include <linux/module.h>
++
++//
++// Keytable for the Videostrong KII Pro STB remote control
++//
++
++static struct rc_map_table kii_pro[] = {
++	{ 0x59, KEY_POWER },
++	{ 0x19, KEY_MUTE },
++	{ 0x42, KEY_RED },
++	{ 0x40, KEY_GREEN },
++	{ 0x00, KEY_YELLOW },
++	{ 0x03, KEY_BLUE },
++	{ 0x4a, KEY_BACK },
++	{ 0x48, KEY_FORWARD },
++	{ 0x08, KEY_PREVIOUSSONG},
++	{ 0x0b, KEY_NEXTSONG},
++	{ 0x46, KEY_PLAYPAUSE },
++	{ 0x44, KEY_STOP },
++	{ 0x1f, KEY_FAVORITES},	//KEY_F5?
++	{ 0x04, KEY_PVR },
++	{ 0x4d, KEY_EPG },
++	{ 0x02, KEY_INFO },
++	{ 0x09, KEY_SUBTITLE },
++	{ 0x01, KEY_AUDIO },
++	{ 0x0d, KEY_HOMEPAGE },
++	{ 0x11, KEY_TV },	// DTV ?
++	{ 0x06, KEY_UP },
++	{ 0x5a, KEY_LEFT },
++	{ 0x1a, KEY_ENTER },	// KEY_OK ?
++	{ 0x1b, KEY_RIGHT },
++	{ 0x16, KEY_DOWN },
++	{ 0x45, KEY_MENU },
++	{ 0x05, KEY_ESC },
++	{ 0x13, KEY_VOLUMEUP },
++	{ 0x17, KEY_VOLUMEDOWN },
++	{ 0x58, KEY_APPSELECT },
++	{ 0x12, KEY_VENDOR },	// mouse
++	{ 0x55, KEY_PAGEUP },	// KEY_CHANNELUP ?
++	{ 0x15, KEY_PAGEDOWN },	// KEY_CHANNELDOWN ?
++	{ 0x52, KEY_1 },
++	{ 0x50, KEY_2 },
++	{ 0x10, KEY_3 },
++	{ 0x56, KEY_4 },
++	{ 0x54, KEY_5 },
++	{ 0x14, KEY_6 },
++	{ 0x4e, KEY_7 },
++	{ 0x4c, KEY_8 },
++	{ 0x0c, KEY_9 },
++	{ 0x18, KEY_WWW },	// KEY_F7
++	{ 0x0f, KEY_0 },
++	{ 0x51, KEY_BACKSPACE },
++};
++
++static struct rc_map_list kii_pro_map = {
++	.map = {
++		.scan     = kii_pro,
++		.size     = ARRAY_SIZE(kii_pro),
++		.rc_proto = RC_PROTO_NEC,
++		.name     = RC_MAP_KII_PRO,
++	}
++};
++
++static int __init init_rc_map_kii_pro(void)
++{
++	return rc_map_register(&kii_pro_map);
++}
++
++static void __exit exit_rc_map_kii_pro(void)
++{
++	rc_map_unregister(&kii_pro_map);
++}
++
++module_init(init_rc_map_kii_pro)
++module_exit(exit_rc_map_kii_pro)
++
++MODULE_LICENSE("GPL");
++MODULE_AUTHOR("Mohammad Rasim <mohammad.rasim96@gmail.com>");
+diff --git a/drivers/mfd/dln2.c b/drivers/mfd/dln2.c
+index 7841c11411d0..4faa8d2e5d04 100644
+--- a/drivers/mfd/dln2.c
++++ b/drivers/mfd/dln2.c
+@@ -90,6 +90,11 @@ struct dln2_mod_rx_slots {
+ 	spinlock_t lock;
+ };
+ 
++enum dln2_endpoint {
++	DLN2_EP_OUT	= 0,
++	DLN2_EP_IN	= 1,
++};
++
+ struct dln2_dev {
+ 	struct usb_device *usb_dev;
+ 	struct usb_interface *interface;
+@@ -733,10 +738,10 @@ static int dln2_probe(struct usb_interface *interface,
+ 	    hostif->desc.bNumEndpoints < 2)
+ 		return -ENODEV;
+ 
+-	epin = &hostif->endpoint[0].desc;
+-	epout = &hostif->endpoint[1].desc;
++	epout = &hostif->endpoint[DLN2_EP_OUT].desc;
+ 	if (!usb_endpoint_is_bulk_out(epout))
+ 		return -ENODEV;
++	epin = &hostif->endpoint[DLN2_EP_IN].desc;
+ 	if (!usb_endpoint_is_bulk_in(epin))
+ 		return -ENODEV;
+ 
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index a4f7e8e689d3..01f222758910 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -315,11 +315,11 @@ complete:
+ 	if (host->busy_status) {
+ 		writel_relaxed(mask & ~host->variant->busy_detect_mask,
+ 			       base + MMCIMASK0);
+-		writel_relaxed(host->variant->busy_detect_mask,
+-			       base + MMCICLEAR);
+ 		host->busy_status = 0;
+ 	}
+ 
++	writel_relaxed(host->variant->busy_detect_mask, base + MMCICLEAR);
++
+ 	return true;
+ }
+ 
+diff --git a/drivers/mtd/nand/raw/cadence-nand-controller.c b/drivers/mtd/nand/raw/cadence-nand-controller.c
+index f6c7102a1e32..664a8db1ecd7 100644
+--- a/drivers/mtd/nand/raw/cadence-nand-controller.c
++++ b/drivers/mtd/nand/raw/cadence-nand-controller.c
+@@ -997,6 +997,7 @@ static int cadence_nand_cdma_send(struct cdns_nand_ctrl *cdns_ctrl,
+ 		return status;
+ 
+ 	cadence_nand_reset_irq(cdns_ctrl);
++	reinit_completion(&cdns_ctrl->complete);
+ 
+ 	writel_relaxed((u32)cdns_ctrl->dma_cdma_desc,
+ 		       cdns_ctrl->reg + CMD_REG2);
+@@ -2585,7 +2586,7 @@ int cadence_nand_attach_chip(struct nand_chip *chip)
+ {
+ 	struct cdns_nand_ctrl *cdns_ctrl = to_cdns_nand_ctrl(chip->controller);
+ 	struct cdns_nand_chip *cdns_chip = to_cdns_nand_chip(chip);
+-	u32 ecc_size = cdns_chip->sector_count * chip->ecc.bytes;
++	u32 ecc_size;
+ 	struct mtd_info *mtd = nand_to_mtd(chip);
+ 	u32 max_oob_data_size;
+ 	int ret;
+@@ -2603,12 +2604,9 @@ int cadence_nand_attach_chip(struct nand_chip *chip)
+ 	chip->options |= NAND_NO_SUBPAGE_WRITE;
+ 
+ 	cdns_chip->bbm_offs = chip->badblockpos;
+-	if (chip->options & NAND_BUSWIDTH_16) {
+-		cdns_chip->bbm_offs &= ~0x01;
+-		cdns_chip->bbm_len = 2;
+-	} else {
+-		cdns_chip->bbm_len = 1;
+-	}
++	cdns_chip->bbm_offs &= ~0x01;
++	/* this value should be even number */
++	cdns_chip->bbm_len = 2;
+ 
+ 	ret = nand_ecc_choose_conf(chip,
+ 				   &cdns_ctrl->ecc_caps,
+@@ -2625,6 +2623,7 @@ int cadence_nand_attach_chip(struct nand_chip *chip)
+ 	/* Error correction configuration. */
+ 	cdns_chip->sector_size = chip->ecc.size;
+ 	cdns_chip->sector_count = mtd->writesize / cdns_chip->sector_size;
++	ecc_size = cdns_chip->sector_count * chip->ecc.bytes;
+ 
+ 	cdns_chip->avail_oob_size = mtd->oobsize - ecc_size;
+ 
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 89f6beefb01c..5750c45019d8 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -568,18 +568,18 @@ static int spinand_mtd_write(struct mtd_info *mtd, loff_t to,
+ static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos)
+ {
+ 	struct spinand_device *spinand = nand_to_spinand(nand);
++	u8 marker[2] = { };
+ 	struct nand_page_io_req req = {
+ 		.pos = *pos,
+-		.ooblen = 2,
++		.ooblen = sizeof(marker),
+ 		.ooboffs = 0,
+-		.oobbuf.in = spinand->oobbuf,
++		.oobbuf.in = marker,
+ 		.mode = MTD_OPS_RAW,
+ 	};
+ 
+-	memset(spinand->oobbuf, 0, 2);
+ 	spinand_select_target(spinand, pos->target);
+ 	spinand_read_page(spinand, &req, false);
+-	if (spinand->oobbuf[0] != 0xff || spinand->oobbuf[1] != 0xff)
++	if (marker[0] != 0xff || marker[1] != 0xff)
+ 		return true;
+ 
+ 	return false;
+@@ -603,15 +603,15 @@ static int spinand_mtd_block_isbad(struct mtd_info *mtd, loff_t offs)
+ static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos)
+ {
+ 	struct spinand_device *spinand = nand_to_spinand(nand);
++	u8 marker[2] = { };
+ 	struct nand_page_io_req req = {
+ 		.pos = *pos,
+ 		.ooboffs = 0,
+-		.ooblen = 2,
+-		.oobbuf.out = spinand->oobbuf,
++		.ooblen = sizeof(marker),
++		.oobbuf.out = marker,
+ 	};
+ 	int ret;
+ 
+-	/* Erase block before marking it bad. */
+ 	ret = spinand_select_target(spinand, pos->target);
+ 	if (ret)
+ 		return ret;
+@@ -620,9 +620,6 @@ static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos)
+ 	if (ret)
+ 		return ret;
+ 
+-	spinand_erase_op(spinand, pos);
+-
+-	memset(spinand->oobbuf, 0, 2);
+ 	return spinand_write_page(spinand, &req);
+ }
+ 
+diff --git a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+index fbf4cbcf1a65..02cdbb22d335 100644
+--- a/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
++++ b/drivers/net/ethernet/qualcomm/rmnet/rmnet_config.c
+@@ -279,7 +279,6 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
+ {
+ 	struct rmnet_priv *priv = netdev_priv(dev);
+ 	struct net_device *real_dev;
+-	struct rmnet_endpoint *ep;
+ 	struct rmnet_port *port;
+ 	u16 mux_id;
+ 
+@@ -294,19 +293,27 @@ static int rmnet_changelink(struct net_device *dev, struct nlattr *tb[],
+ 
+ 	if (data[IFLA_RMNET_MUX_ID]) {
+ 		mux_id = nla_get_u16(data[IFLA_RMNET_MUX_ID]);
+-		if (rmnet_get_endpoint(port, mux_id)) {
+-			NL_SET_ERR_MSG_MOD(extack, "MUX ID already exists");
+-			return -EINVAL;
+-		}
+-		ep = rmnet_get_endpoint(port, priv->mux_id);
+-		if (!ep)
+-			return -ENODEV;
+ 
+-		hlist_del_init_rcu(&ep->hlnode);
+-		hlist_add_head_rcu(&ep->hlnode, &port->muxed_ep[mux_id]);
++		if (mux_id != priv->mux_id) {
++			struct rmnet_endpoint *ep;
++
++			ep = rmnet_get_endpoint(port, priv->mux_id);
++			if (!ep)
++				return -ENODEV;
+ 
+-		ep->mux_id = mux_id;
+-		priv->mux_id = mux_id;
++			if (rmnet_get_endpoint(port, mux_id)) {
++				NL_SET_ERR_MSG_MOD(extack,
++						   "MUX ID already exists");
++				return -EINVAL;
++			}
++
++			hlist_del_init_rcu(&ep->hlnode);
++			hlist_add_head_rcu(&ep->hlnode,
++					   &port->muxed_ep[mux_id]);
++
++			ep->mux_id = mux_id;
++			priv->mux_id = mux_id;
++		}
+ 	}
+ 
+ 	if (data[IFLA_RMNET_FLAGS]) {
+diff --git a/drivers/net/wireless/ath/ath9k/main.c b/drivers/net/wireless/ath/ath9k/main.c
+index 0548aa3702e3..ef2b856670e1 100644
+--- a/drivers/net/wireless/ath/ath9k/main.c
++++ b/drivers/net/wireless/ath/ath9k/main.c
+@@ -1457,6 +1457,9 @@ static int ath9k_config(struct ieee80211_hw *hw, u32 changed)
+ 		ath_chanctx_set_channel(sc, ctx, &hw->conf.chandef);
+ 	}
+ 
++	if (changed & IEEE80211_CONF_CHANGE_POWER)
++		ath9k_set_txpower(sc, NULL);
++
+ 	mutex_unlock(&sc->mutex);
+ 	ath9k_ps_restore(sc);
+ 
+diff --git a/drivers/nvme/host/fc.c b/drivers/nvme/host/fc.c
+index 5a70ac395d53..c0c4b1587ba0 100644
+--- a/drivers/nvme/host/fc.c
++++ b/drivers/nvme/host/fc.c
+@@ -342,8 +342,7 @@ nvme_fc_register_localport(struct nvme_fc_port_info *pinfo,
+ 	    !template->ls_req || !template->fcp_io ||
+ 	    !template->ls_abort || !template->fcp_abort ||
+ 	    !template->max_hw_queues || !template->max_sgl_segments ||
+-	    !template->max_dif_sgl_segments || !template->dma_boundary ||
+-	    !template->module) {
++	    !template->max_dif_sgl_segments || !template->dma_boundary) {
+ 		ret = -EINVAL;
+ 		goto out_reghost_failed;
+ 	}
+@@ -2016,7 +2015,6 @@ nvme_fc_ctrl_free(struct kref *ref)
+ {
+ 	struct nvme_fc_ctrl *ctrl =
+ 		container_of(ref, struct nvme_fc_ctrl, ref);
+-	struct nvme_fc_lport *lport = ctrl->lport;
+ 	unsigned long flags;
+ 
+ 	if (ctrl->ctrl.tagset) {
+@@ -2043,7 +2041,6 @@ nvme_fc_ctrl_free(struct kref *ref)
+ 	if (ctrl->ctrl.opts)
+ 		nvmf_free_options(ctrl->ctrl.opts);
+ 	kfree(ctrl);
+-	module_put(lport->ops->module);
+ }
+ 
+ static void
+@@ -3074,15 +3071,10 @@ nvme_fc_init_ctrl(struct device *dev, struct nvmf_ctrl_options *opts,
+ 		goto out_fail;
+ 	}
+ 
+-	if (!try_module_get(lport->ops->module)) {
+-		ret = -EUNATCH;
+-		goto out_free_ctrl;
+-	}
+-
+ 	idx = ida_simple_get(&nvme_fc_ctrl_cnt, 0, 0, GFP_KERNEL);
+ 	if (idx < 0) {
+ 		ret = -ENOSPC;
+-		goto out_mod_put;
++		goto out_free_ctrl;
+ 	}
+ 
+ 	ctrl->ctrl.opts = opts;
+@@ -3235,8 +3227,6 @@ out_free_queues:
+ out_free_ida:
+ 	put_device(ctrl->dev);
+ 	ida_simple_remove(&nvme_fc_ctrl_cnt, ctrl->cnum);
+-out_mod_put:
+-	module_put(lport->ops->module);
+ out_free_ctrl:
+ 	kfree(ctrl);
+ out_fail:
+diff --git a/drivers/nvme/target/fcloop.c b/drivers/nvme/target/fcloop.c
+index 1c50af6219f3..b50b53db3746 100644
+--- a/drivers/nvme/target/fcloop.c
++++ b/drivers/nvme/target/fcloop.c
+@@ -850,7 +850,6 @@ fcloop_targetport_delete(struct nvmet_fc_target_port *targetport)
+ #define FCLOOP_DMABOUND_4G		0xFFFFFFFF
+ 
+ static struct nvme_fc_port_template fctemplate = {
+-	.module			= THIS_MODULE,
+ 	.localport_delete	= fcloop_localport_delete,
+ 	.remoteport_delete	= fcloop_remoteport_delete,
+ 	.create_queue		= fcloop_create_queue,
+diff --git a/drivers/nvme/target/tcp.c b/drivers/nvme/target/tcp.c
+index 5bb5342b8d0c..5b535f2e7161 100644
+--- a/drivers/nvme/target/tcp.c
++++ b/drivers/nvme/target/tcp.c
+@@ -794,7 +794,7 @@ static int nvmet_tcp_handle_icreq(struct nvmet_tcp_queue *queue)
+ 	icresp->hdr.pdo = 0;
+ 	icresp->hdr.plen = cpu_to_le32(icresp->hdr.hlen);
+ 	icresp->pfv = cpu_to_le16(NVME_TCP_PFV_1_0);
+-	icresp->maxdata = cpu_to_le32(0xffff); /* FIXME: support r2t */
++	icresp->maxdata = cpu_to_le32(0x400000); /* 16M arbitrary limit */
+ 	icresp->cpda = 0;
+ 	if (queue->hdr_digest)
+ 		icresp->digest |= NVME_TCP_HDR_DIGEST_ENABLE;
+diff --git a/drivers/pci/controller/dwc/pcie-qcom.c b/drivers/pci/controller/dwc/pcie-qcom.c
+index 5ea527a6bd9f..138e1a2d21cc 100644
+--- a/drivers/pci/controller/dwc/pcie-qcom.c
++++ b/drivers/pci/controller/dwc/pcie-qcom.c
+@@ -1439,7 +1439,13 @@ static void qcom_fixup_class(struct pci_dev *dev)
+ {
+ 	dev->class = PCI_CLASS_BRIDGE_PCI << 8;
+ }
+-DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, PCI_ANY_ID, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0101, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0104, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0106, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0107, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x0302, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1000, qcom_fixup_class);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_QCOM, 0x1001, qcom_fixup_class);
+ 
+ static struct platform_driver qcom_pcie_driver = {
+ 	.probe = qcom_pcie_probe,
+diff --git a/drivers/pci/endpoint/pci-epc-mem.c b/drivers/pci/endpoint/pci-epc-mem.c
+index d2b174ce15de..abfac1109a13 100644
+--- a/drivers/pci/endpoint/pci-epc-mem.c
++++ b/drivers/pci/endpoint/pci-epc-mem.c
+@@ -79,6 +79,7 @@ int __pci_epc_mem_init(struct pci_epc *epc, phys_addr_t phys_base, size_t size,
+ 	mem->page_size = page_size;
+ 	mem->pages = pages;
+ 	mem->size = size;
++	mutex_init(&mem->lock);
+ 
+ 	epc->mem = mem;
+ 
+@@ -122,7 +123,7 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
+ 				     phys_addr_t *phys_addr, size_t size)
+ {
+ 	int pageno;
+-	void __iomem *virt_addr;
++	void __iomem *virt_addr = NULL;
+ 	struct pci_epc_mem *mem = epc->mem;
+ 	unsigned int page_shift = ilog2(mem->page_size);
+ 	int order;
+@@ -130,15 +131,18 @@ void __iomem *pci_epc_mem_alloc_addr(struct pci_epc *epc,
+ 	size = ALIGN(size, mem->page_size);
+ 	order = pci_epc_mem_get_order(mem, size);
+ 
++	mutex_lock(&mem->lock);
+ 	pageno = bitmap_find_free_region(mem->bitmap, mem->pages, order);
+ 	if (pageno < 0)
+-		return NULL;
++		goto ret;
+ 
+ 	*phys_addr = mem->phys_base + ((phys_addr_t)pageno << page_shift);
+ 	virt_addr = ioremap(*phys_addr, size);
+ 	if (!virt_addr)
+ 		bitmap_release_region(mem->bitmap, pageno, order);
+ 
++ret:
++	mutex_unlock(&mem->lock);
+ 	return virt_addr;
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_mem_alloc_addr);
+@@ -164,7 +168,9 @@ void pci_epc_mem_free_addr(struct pci_epc *epc, phys_addr_t phys_addr,
+ 	pageno = (phys_addr - mem->phys_base) >> page_shift;
+ 	size = ALIGN(size, mem->page_size);
+ 	order = pci_epc_mem_get_order(mem, size);
++	mutex_lock(&mem->lock);
+ 	bitmap_release_region(mem->bitmap, pageno, order);
++	mutex_unlock(&mem->lock);
+ }
+ EXPORT_SYMBOL_GPL(pci_epc_mem_free_addr);
+ 
+diff --git a/drivers/pci/hotplug/pciehp_hpc.c b/drivers/pci/hotplug/pciehp_hpc.c
+index 8a2cb1764386..14e6dccae8f1 100644
+--- a/drivers/pci/hotplug/pciehp_hpc.c
++++ b/drivers/pci/hotplug/pciehp_hpc.c
+@@ -625,17 +625,15 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 	if (atomic_fetch_and(~RERUN_ISR, &ctrl->pending_events) & RERUN_ISR) {
+ 		ret = pciehp_isr(irq, dev_id);
+ 		enable_irq(irq);
+-		if (ret != IRQ_WAKE_THREAD) {
+-			pci_config_pm_runtime_put(pdev);
+-			return ret;
+-		}
++		if (ret != IRQ_WAKE_THREAD)
++			goto out;
+ 	}
+ 
+ 	synchronize_hardirq(irq);
+ 	events = atomic_xchg(&ctrl->pending_events, 0);
+ 	if (!events) {
+-		pci_config_pm_runtime_put(pdev);
+-		return IRQ_NONE;
++		ret = IRQ_NONE;
++		goto out;
+ 	}
+ 
+ 	/* Check Attention Button Pressed */
+@@ -664,10 +662,12 @@ static irqreturn_t pciehp_ist(int irq, void *dev_id)
+ 		pciehp_handle_presence_or_link_change(ctrl, events);
+ 	up_read(&ctrl->reset_lock);
+ 
++	ret = IRQ_HANDLED;
++out:
+ 	pci_config_pm_runtime_put(pdev);
+ 	ctrl->ist_running = false;
+ 	wake_up(&ctrl->requester);
+-	return IRQ_HANDLED;
++	return ret;
+ }
+ 
+ static int pciehp_poll(void *data)
+diff --git a/drivers/pci/pcie/aspm.c b/drivers/pci/pcie/aspm.c
+index 0dcd44308228..c2596e79ec63 100644
+--- a/drivers/pci/pcie/aspm.c
++++ b/drivers/pci/pcie/aspm.c
+@@ -747,9 +747,9 @@ static void pcie_config_aspm_l1ss(struct pcie_link_state *link, u32 state)
+ 
+ 	/* Enable what we need to enable */
+ 	pci_clear_and_set_dword(parent, up_cap_ptr + PCI_L1SS_CTL1,
+-				PCI_L1SS_CAP_L1_PM_SS, val);
++				PCI_L1SS_CTL1_L1SS_MASK, val);
+ 	pci_clear_and_set_dword(child, dw_cap_ptr + PCI_L1SS_CTL1,
+-				PCI_L1SS_CAP_L1_PM_SS, val);
++				PCI_L1SS_CTL1_L1SS_MASK, val);
+ }
+ 
+ static void pcie_config_aspm_dev(struct pci_dev *pdev, u32 val)
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index 29f473ebf20f..b7347bc6a24d 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -1970,26 +1970,92 @@ DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	PCI_DEVICE_ID_INTEL_80332_1,	quirk
+ /*
+  * IO-APIC1 on 6300ESB generates boot interrupts, see Intel order no
+  * 300641-004US, section 5.7.3.
++ *
++ * Core IO on Xeon E5 1600/2600/4600, see Intel order no 326509-003.
++ * Core IO on Xeon E5 v2, see Intel order no 329188-003.
++ * Core IO on Xeon E7 v2, see Intel order no 329595-002.
++ * Core IO on Xeon E5 v3, see Intel order no 330784-003.
++ * Core IO on Xeon E7 v3, see Intel order no 332315-001US.
++ * Core IO on Xeon E5 v4, see Intel order no 333810-002US.
++ * Core IO on Xeon E7 v4, see Intel order no 332315-001US.
++ * Core IO on Xeon D-1500, see Intel order no 332051-001.
++ * Core IO on Xeon Scalable, see Intel order no 610950.
+  */
+-#define INTEL_6300_IOAPIC_ABAR		0x40
++#define INTEL_6300_IOAPIC_ABAR		0x40	/* Bus 0, Dev 29, Func 5 */
+ #define INTEL_6300_DISABLE_BOOT_IRQ	(1<<14)
+ 
++#define INTEL_CIPINTRC_CFG_OFFSET	0x14C	/* Bus 0, Dev 5, Func 0 */
++#define INTEL_CIPINTRC_DIS_INTX_ICH	(1<<25)
++
+ static void quirk_disable_intel_boot_interrupt(struct pci_dev *dev)
+ {
+ 	u16 pci_config_word;
++	u32 pci_config_dword;
+ 
+ 	if (noioapicquirk)
+ 		return;
+ 
+-	pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR, &pci_config_word);
+-	pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ;
+-	pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR, pci_config_word);
+-
++	switch (dev->device) {
++	case PCI_DEVICE_ID_INTEL_ESB_10:
++		pci_read_config_word(dev, INTEL_6300_IOAPIC_ABAR,
++				     &pci_config_word);
++		pci_config_word |= INTEL_6300_DISABLE_BOOT_IRQ;
++		pci_write_config_word(dev, INTEL_6300_IOAPIC_ABAR,
++				      pci_config_word);
++		break;
++	case 0x3c28:	/* Xeon E5 1600/2600/4600	*/
++	case 0x0e28:	/* Xeon E5/E7 V2		*/
++	case 0x2f28:	/* Xeon E5/E7 V3,V4		*/
++	case 0x6f28:	/* Xeon D-1500			*/
++	case 0x2034:	/* Xeon Scalable Family		*/
++		pci_read_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET,
++				      &pci_config_dword);
++		pci_config_dword |= INTEL_CIPINTRC_DIS_INTX_ICH;
++		pci_write_config_dword(dev, INTEL_CIPINTRC_CFG_OFFSET,
++				       pci_config_dword);
++		break;
++	default:
++		return;
++	}
+ 	pci_info(dev, "disabled boot interrupts on device [%04x:%04x]\n",
+ 		 dev->vendor, dev->device);
+ }
+-DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,   PCI_DEVICE_ID_INTEL_ESB_10,	quirk_disable_intel_boot_interrupt);
+-DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,   PCI_DEVICE_ID_INTEL_ESB_10,	quirk_disable_intel_boot_interrupt);
++/*
++ * Device 29 Func 5 Device IDs of IO-APIC
++ * containing ABAR—APIC1 Alternate Base Address Register
++ */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,	PCI_DEVICE_ID_INTEL_ESB_10,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	PCI_DEVICE_ID_INTEL_ESB_10,
++		quirk_disable_intel_boot_interrupt);
++
++/*
++ * Device 5 Func 0 Device IDs of Core IO modules/hubs
++ * containing Coherent Interface Protocol Interrupt Control
++ *
++ * Device IDs obtained from volume 2 datasheets of commented
++ * families above.
++ */
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,	0x3c28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,	0x0e28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,	0x2f28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,	0x6f28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL,	0x2034,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	0x3c28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	0x0e28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	0x2f28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	0x6f28,
++		quirk_disable_intel_boot_interrupt);
++DECLARE_PCI_FIXUP_RESUME(PCI_VENDOR_ID_INTEL,	0x2034,
++		quirk_disable_intel_boot_interrupt);
+ 
+ /* Disable boot interrupts on HT-1000 */
+ #define BC_HT1000_FEATURE_REG		0x64
+diff --git a/drivers/pci/switch/switchtec.c b/drivers/pci/switch/switchtec.c
+index a823b4b8ef8a..81dc7ac01381 100644
+--- a/drivers/pci/switch/switchtec.c
++++ b/drivers/pci/switch/switchtec.c
+@@ -175,7 +175,7 @@ static int mrpc_queue_cmd(struct switchtec_user *stuser)
+ 	kref_get(&stuser->kref);
+ 	stuser->read_len = sizeof(stuser->data);
+ 	stuser_set_state(stuser, MRPC_QUEUED);
+-	init_completion(&stuser->comp);
++	reinit_completion(&stuser->comp);
+ 	list_add_tail(&stuser->list, &stdev->mrpc_queue);
+ 
+ 	mrpc_cmd_submit(stdev);
+diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
+index 612ef5526226..01becbe2a9a8 100644
+--- a/drivers/platform/x86/asus-wmi.c
++++ b/drivers/platform/x86/asus-wmi.c
+@@ -426,8 +426,11 @@ static int asus_wmi_battery_add(struct power_supply *battery)
+ {
+ 	/* The WMI method does not provide a way to specific a battery, so we
+ 	 * just assume it is the first battery.
++	 * Note: On some newer ASUS laptops (Zenbook UM431DA), the primary/first
++	 * battery is named BATT.
+ 	 */
+-	if (strcmp(battery->desc->name, "BAT0") != 0)
++	if (strcmp(battery->desc->name, "BAT0") != 0 &&
++	    strcmp(battery->desc->name, "BATT") != 0)
+ 		return -ENODEV;
+ 
+ 	if (device_create_file(&battery->dev,
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index a1cc9cbe038f..0b1d737b0e97 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1001,11 +1001,6 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
+ 		writel(val, qproc->reg_base + QDSP6SS_PWR_CTL_REG);
+ 	}
+ 
+-	ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
+-				      false, qproc->mpss_phys,
+-				      qproc->mpss_size);
+-	WARN_ON(ret);
+-
+ 	q6v5_reset_assert(qproc);
+ 
+ 	q6v5_clk_disable(qproc->dev, qproc->reset_clks,
+@@ -1035,6 +1030,23 @@ static void q6v5_mba_reclaim(struct q6v5 *qproc)
+ 	}
+ }
+ 
++static int q6v5_reload_mba(struct rproc *rproc)
++{
++	struct q6v5 *qproc = rproc->priv;
++	const struct firmware *fw;
++	int ret;
++
++	ret = request_firmware(&fw, rproc->firmware, qproc->dev);
++	if (ret < 0)
++		return ret;
++
++	q6v5_load(rproc, fw);
++	ret = q6v5_mba_load(qproc);
++	release_firmware(fw);
++
++	return ret;
++}
++
+ static int q6v5_mpss_load(struct q6v5 *qproc)
+ {
+ 	const struct elf32_phdr *phdrs;
+@@ -1095,6 +1107,14 @@ static int q6v5_mpss_load(struct q6v5 *qproc)
+ 			max_addr = ALIGN(phdr->p_paddr + phdr->p_memsz, SZ_4K);
+ 	}
+ 
++	/**
++	 * In case of a modem subsystem restart on secure devices, the modem
++	 * memory can be reclaimed only after MBA is loaded. For modem cold
++	 * boot this will be a nop
++	 */
++	q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, false,
++				qproc->mpss_phys, qproc->mpss_size);
++
+ 	mpss_reloc = relocate ? min_addr : qproc->mpss_phys;
+ 	qproc->mpss_reloc = mpss_reloc;
+ 	/* Load firmware segments */
+@@ -1184,8 +1204,16 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ 	void *ptr = rproc_da_to_va(rproc, segment->da, segment->size);
+ 
+ 	/* Unlock mba before copying segments */
+-	if (!qproc->dump_mba_loaded)
+-		ret = q6v5_mba_load(qproc);
++	if (!qproc->dump_mba_loaded) {
++		ret = q6v5_reload_mba(rproc);
++		if (!ret) {
++			/* Reset ownership back to Linux to copy segments */
++			ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
++						      false,
++						      qproc->mpss_phys,
++						      qproc->mpss_size);
++		}
++	}
+ 
+ 	if (!ptr || ret)
+ 		memset(dest, 0xff, segment->size);
+@@ -1196,8 +1224,14 @@ static void qcom_q6v5_dump_segment(struct rproc *rproc,
+ 
+ 	/* Reclaim mba after copying segments */
+ 	if (qproc->dump_segment_mask == qproc->dump_complete_mask) {
+-		if (qproc->dump_mba_loaded)
++		if (qproc->dump_mba_loaded) {
++			/* Try to reset ownership back to Q6 */
++			q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
++						true,
++						qproc->mpss_phys,
++						qproc->mpss_size);
+ 			q6v5_mba_reclaim(qproc);
++		}
+ 	}
+ }
+ 
+@@ -1237,10 +1271,6 @@ static int q6v5_start(struct rproc *rproc)
+ 	return 0;
+ 
+ reclaim_mpss:
+-	xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm,
+-						false, qproc->mpss_phys,
+-						qproc->mpss_size);
+-	WARN_ON(xfermemop_ret);
+ 	q6v5_mba_reclaim(qproc);
+ 
+ 	return ret;
+diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
+index 8c07cb2ca8ba..31a62a0b470e 100644
+--- a/drivers/remoteproc/remoteproc_virtio.c
++++ b/drivers/remoteproc/remoteproc_virtio.c
+@@ -334,6 +334,13 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
+ 	struct rproc_mem_entry *mem;
+ 	int ret;
+ 
++	if (rproc->ops->kick == NULL) {
++		ret = -EINVAL;
++		dev_err(dev, ".kick method not defined for %s",
++				rproc->name);
++		goto out;
++	}
++
+ 	/* Try to find dedicated vdev buffer carveout */
+ 	mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index);
+ 	if (mem) {
+diff --git a/drivers/s390/scsi/zfcp_erp.c b/drivers/s390/scsi/zfcp_erp.c
+index 93655b85b73f..18a6751299f9 100644
+--- a/drivers/s390/scsi/zfcp_erp.c
++++ b/drivers/s390/scsi/zfcp_erp.c
+@@ -725,7 +725,7 @@ static void zfcp_erp_enqueue_ptp_port(struct zfcp_adapter *adapter)
+ 				 adapter->peer_d_id);
+ 	if (IS_ERR(port)) /* error or port already attached */
+ 		return;
+-	_zfcp_erp_port_reopen(port, 0, "ereptp1");
++	zfcp_erp_port_reopen(port, 0, "ereptp1");
+ }
+ 
+ static enum zfcp_erp_act_result zfcp_erp_adapter_strat_fsf_xconf(
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 04d73e2be373..3f2cb17c4574 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -749,6 +749,7 @@ struct lpfc_hba {
+ 					 * capability
+ 					 */
+ #define HBA_FLOGI_ISSUED	0x100000 /* FLOGI was issued */
++#define HBA_DEFER_FLOGI		0x800000 /* Defer FLOGI till read_sparm cmpl */
+ 
+ 	uint32_t fcp_ring_in_use; /* When polling test if intr-hndlr active*/
+ 	struct lpfc_dmabuf slim2p;
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index dcc8999c6a68..6a2bdae0e52a 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -1163,13 +1163,16 @@ lpfc_mbx_cmpl_local_config_link(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 	}
+ 
+ 	/* Start discovery by sending a FLOGI. port_state is identically
+-	 * LPFC_FLOGI while waiting for FLOGI cmpl
++	 * LPFC_FLOGI while waiting for FLOGI cmpl. Check if sending
++	 * the FLOGI is being deferred till after MBX_READ_SPARAM completes.
+ 	 */
+-	if (vport->port_state != LPFC_FLOGI)
+-		lpfc_initial_flogi(vport);
+-	else if (vport->fc_flag & FC_PT2PT)
+-		lpfc_disc_start(vport);
+-
++	if (vport->port_state != LPFC_FLOGI) {
++		if (!(phba->hba_flag & HBA_DEFER_FLOGI))
++			lpfc_initial_flogi(vport);
++	} else {
++		if (vport->fc_flag & FC_PT2PT)
++			lpfc_disc_start(vport);
++	}
+ 	return;
+ 
+ out:
+@@ -3094,6 +3097,14 @@ lpfc_mbx_cmpl_read_sparam(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 	lpfc_mbuf_free(phba, mp->virt, mp->phys);
+ 	kfree(mp);
+ 	mempool_free(pmb, phba->mbox_mem_pool);
++
++	/* Check if sending the FLOGI is being deferred to after we get
++	 * up to date CSPs from MBX_READ_SPARAM.
++	 */
++	if (phba->hba_flag & HBA_DEFER_FLOGI) {
++		lpfc_initial_flogi(vport);
++		phba->hba_flag &= ~HBA_DEFER_FLOGI;
++	}
+ 	return;
+ 
+ out:
+@@ -3224,6 +3235,23 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
+ 	}
+ 
+ 	lpfc_linkup(phba);
++	sparam_mbox = NULL;
++
++	if (!(phba->hba_flag & HBA_FCOE_MODE)) {
++		cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
++		if (!cfglink_mbox)
++			goto out;
++		vport->port_state = LPFC_LOCAL_CFG_LINK;
++		lpfc_config_link(phba, cfglink_mbox);
++		cfglink_mbox->vport = vport;
++		cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
++		rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
++		if (rc == MBX_NOT_FINISHED) {
++			mempool_free(cfglink_mbox, phba->mbox_mem_pool);
++			goto out;
++		}
++	}
++
+ 	sparam_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+ 	if (!sparam_mbox)
+ 		goto out;
+@@ -3244,20 +3272,7 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
+ 		goto out;
+ 	}
+ 
+-	if (!(phba->hba_flag & HBA_FCOE_MODE)) {
+-		cfglink_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL);
+-		if (!cfglink_mbox)
+-			goto out;
+-		vport->port_state = LPFC_LOCAL_CFG_LINK;
+-		lpfc_config_link(phba, cfglink_mbox);
+-		cfglink_mbox->vport = vport;
+-		cfglink_mbox->mbox_cmpl = lpfc_mbx_cmpl_local_config_link;
+-		rc = lpfc_sli_issue_mbox(phba, cfglink_mbox, MBX_NOWAIT);
+-		if (rc == MBX_NOT_FINISHED) {
+-			mempool_free(cfglink_mbox, phba->mbox_mem_pool);
+-			goto out;
+-		}
+-	} else {
++	if (phba->hba_flag & HBA_FCOE_MODE) {
+ 		vport->port_state = LPFC_VPORT_UNKNOWN;
+ 		/*
+ 		 * Add the driver's default FCF record at FCF index 0 now. This
+@@ -3314,6 +3329,10 @@ lpfc_mbx_process_link_up(struct lpfc_hba *phba, struct lpfc_mbx_read_top *la)
+ 		}
+ 		/* Reset FCF roundrobin bmask for new discovery */
+ 		lpfc_sli4_clear_fcf_rr_bmask(phba);
++	} else {
++		if (phba->bbcredit_support && phba->cfg_enable_bbcr &&
++		    !(phba->link_flag & LS_LOOPBACK_MODE))
++			phba->hba_flag |= HBA_DEFER_FLOGI;
+ 	}
+ 
+ 	/* Prepare for LINK up registrations */
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index f6c8963c915d..db4a04a207ec 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -1985,8 +1985,6 @@ out_unlock:
+ 
+ /* Declare and initialization an instance of the FC NVME template. */
+ static struct nvme_fc_port_template lpfc_nvme_template = {
+-	.module	= THIS_MODULE,
+-
+ 	/* initiator-based functions */
+ 	.localport_delete  = lpfc_nvme_localport_delete,
+ 	.remoteport_delete = lpfc_nvme_remoteport_delete,
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 2c7e0b22db2f..96ac4a154c58 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -671,8 +671,10 @@ lpfc_get_scsi_buf_s4(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp,
+ 	lpfc_cmd->prot_data_type = 0;
+ #endif
+ 	tmp = lpfc_get_cmd_rsp_buf_per_hdwq(phba, lpfc_cmd);
+-	if (!tmp)
++	if (!tmp) {
++		lpfc_release_io_buf(phba, lpfc_cmd, lpfc_cmd->hdwq);
+ 		return NULL;
++	}
+ 
+ 	lpfc_cmd->fcp_cmnd = tmp->fcp_cmnd;
+ 	lpfc_cmd->fcp_rsp = tmp->fcp_rsp;
+diff --git a/drivers/scsi/mpt3sas/mpt3sas_scsih.c b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+index c597d544eb39..a8ec1caf9c77 100644
+--- a/drivers/scsi/mpt3sas/mpt3sas_scsih.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_scsih.c
+@@ -9908,8 +9908,8 @@ static void scsih_remove(struct pci_dev *pdev)
+ 
+ 	ioc->remove_host = 1;
+ 
+-	mpt3sas_wait_for_commands_to_complete(ioc);
+-	_scsih_flush_running_cmds(ioc);
++	if (!pci_device_is_present(pdev))
++		_scsih_flush_running_cmds(ioc);
+ 
+ 	_scsih_fw_event_cleanup_queue(ioc);
+ 
+@@ -9992,8 +9992,8 @@ scsih_shutdown(struct pci_dev *pdev)
+ 
+ 	ioc->remove_host = 1;
+ 
+-	mpt3sas_wait_for_commands_to_complete(ioc);
+-	_scsih_flush_running_cmds(ioc);
++	if (!pci_device_is_present(pdev))
++		_scsih_flush_running_cmds(ioc);
+ 
+ 	_scsih_fw_event_cleanup_queue(ioc);
+ 
+diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
+index bfcd02fdf2b8..941aa53363f5 100644
+--- a/drivers/scsi/qla2xxx/qla_nvme.c
++++ b/drivers/scsi/qla2xxx/qla_nvme.c
+@@ -610,7 +610,6 @@ static void qla_nvme_remoteport_delete(struct nvme_fc_remote_port *rport)
+ }
+ 
+ static struct nvme_fc_port_template qla_nvme_fc_transport = {
+-	.module	= THIS_MODULE,
+ 	.localport_delete = qla_nvme_localport_delete,
+ 	.remoteport_delete = qla_nvme_remoteport_delete,
+ 	.create_queue   = qla_nvme_alloc_queue,
+diff --git a/drivers/scsi/sr.c b/drivers/scsi/sr.c
+index e4240e4ae8bb..d2fe3fa470f9 100644
+--- a/drivers/scsi/sr.c
++++ b/drivers/scsi/sr.c
+@@ -79,7 +79,6 @@ MODULE_ALIAS_SCSI_DEVICE(TYPE_WORM);
+ 	 CDC_CD_R|CDC_CD_RW|CDC_DVD|CDC_DVD_R|CDC_DVD_RAM|CDC_GENERIC_PACKET| \
+ 	 CDC_MRW|CDC_MRW_W|CDC_RAM)
+ 
+-static DEFINE_MUTEX(sr_mutex);
+ static int sr_probe(struct device *);
+ static int sr_remove(struct device *);
+ static blk_status_t sr_init_command(struct scsi_cmnd *SCpnt);
+@@ -536,9 +535,9 @@ static int sr_block_open(struct block_device *bdev, fmode_t mode)
+ 	scsi_autopm_get_device(sdev);
+ 	check_disk_change(bdev);
+ 
+-	mutex_lock(&sr_mutex);
++	mutex_lock(&cd->lock);
+ 	ret = cdrom_open(&cd->cdi, bdev, mode);
+-	mutex_unlock(&sr_mutex);
++	mutex_unlock(&cd->lock);
+ 
+ 	scsi_autopm_put_device(sdev);
+ 	if (ret)
+@@ -551,10 +550,12 @@ out:
+ static void sr_block_release(struct gendisk *disk, fmode_t mode)
+ {
+ 	struct scsi_cd *cd = scsi_cd(disk);
+-	mutex_lock(&sr_mutex);
++
++	mutex_lock(&cd->lock);
+ 	cdrom_release(&cd->cdi, mode);
++	mutex_unlock(&cd->lock);
++
+ 	scsi_cd_put(cd);
+-	mutex_unlock(&sr_mutex);
+ }
+ 
+ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
+@@ -565,7 +566,7 @@ static int sr_block_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
+ 	void __user *argp = (void __user *)arg;
+ 	int ret;
+ 
+-	mutex_lock(&sr_mutex);
++	mutex_lock(&cd->lock);
+ 
+ 	ret = scsi_ioctl_block_when_processing_errors(sdev, cmd,
+ 			(mode & FMODE_NDELAY) != 0);
+@@ -595,7 +596,7 @@ put:
+ 	scsi_autopm_put_device(sdev);
+ 
+ out:
+-	mutex_unlock(&sr_mutex);
++	mutex_unlock(&cd->lock);
+ 	return ret;
+ }
+ 
+@@ -608,7 +609,7 @@ static int sr_block_compat_ioctl(struct block_device *bdev, fmode_t mode, unsign
+ 	void __user *argp = compat_ptr(arg);
+ 	int ret;
+ 
+-	mutex_lock(&sr_mutex);
++	mutex_lock(&cd->lock);
+ 
+ 	ret = scsi_ioctl_block_when_processing_errors(sdev, cmd,
+ 			(mode & FMODE_NDELAY) != 0);
+@@ -638,7 +639,7 @@ put:
+ 	scsi_autopm_put_device(sdev);
+ 
+ out:
+-	mutex_unlock(&sr_mutex);
++	mutex_unlock(&cd->lock);
+ 	return ret;
+ 
+ }
+@@ -745,6 +746,7 @@ static int sr_probe(struct device *dev)
+ 	disk = alloc_disk(1);
+ 	if (!disk)
+ 		goto fail_free;
++	mutex_init(&cd->lock);
+ 
+ 	spin_lock(&sr_index_lock);
+ 	minor = find_first_zero_bit(sr_index_bits, SR_DISKS);
+@@ -1055,6 +1057,8 @@ static void sr_kref_release(struct kref *kref)
+ 
+ 	put_disk(disk);
+ 
++	mutex_destroy(&cd->lock);
++
+ 	kfree(cd);
+ }
+ 
+diff --git a/drivers/scsi/sr.h b/drivers/scsi/sr.h
+index a2bb7b8bace5..339c624e04d8 100644
+--- a/drivers/scsi/sr.h
++++ b/drivers/scsi/sr.h
+@@ -20,6 +20,7 @@
+ 
+ #include <linux/genhd.h>
+ #include <linux/kref.h>
++#include <linux/mutex.h>
+ 
+ #define MAX_RETRIES	3
+ #define SR_TIMEOUT	(30 * HZ)
+@@ -51,6 +52,7 @@ typedef struct scsi_cd {
+ 	bool ignore_get_event:1;	/* GET_EVENT is unreliable, use TUR */
+ 
+ 	struct cdrom_device_info cdi;
++	struct mutex lock;
+ 	/* We hold gendisk and scsi_device references on probe and use
+ 	 * the refs on this kref to decide when to release them */
+ 	struct kref kref;
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 2d705694636c..06758a5d9eb1 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -5486,7 +5486,8 @@ static irqreturn_t ufshcd_update_uic_error(struct ufs_hba *hba)
+ static bool ufshcd_is_auto_hibern8_error(struct ufs_hba *hba,
+ 					 u32 intr_mask)
+ {
+-	if (!ufshcd_is_auto_hibern8_supported(hba))
++	if (!ufshcd_is_auto_hibern8_supported(hba) ||
++	    !ufshcd_is_auto_hibern8_enabled(hba))
+ 		return false;
+ 
+ 	if (!(intr_mask & UFSHCD_UIC_HIBERN8_MASK))
+diff --git a/drivers/scsi/ufs/ufshcd.h b/drivers/scsi/ufs/ufshcd.h
+index 2ae6c7c8528c..81c71a3e3474 100644
+--- a/drivers/scsi/ufs/ufshcd.h
++++ b/drivers/scsi/ufs/ufshcd.h
+@@ -55,6 +55,7 @@
+ #include <linux/clk.h>
+ #include <linux/completion.h>
+ #include <linux/regulator/consumer.h>
++#include <linux/bitfield.h>
+ #include "unipro.h"
+ 
+ #include <asm/irq.h>
+@@ -773,6 +774,11 @@ static inline bool ufshcd_is_auto_hibern8_supported(struct ufs_hba *hba)
+ 	return (hba->capabilities & MASK_AUTO_HIBERN8_SUPPORT);
+ }
+ 
++static inline bool ufshcd_is_auto_hibern8_enabled(struct ufs_hba *hba)
++{
++	return FIELD_GET(UFSHCI_AHIBERN8_TIMER_MASK, hba->ahit) ? true : false;
++}
++
+ #define ufshcd_writel(hba, val, reg)	\
+ 	writel((val), (hba)->mmio_base + (reg))
+ #define ufshcd_readl(hba, reg)	\
+diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c
+index 6ec2dcb8c57a..1305030379e8 100644
+--- a/drivers/spi/spi-fsl-dspi.c
++++ b/drivers/spi/spi-fsl-dspi.c
+@@ -196,8 +196,7 @@ struct fsl_dspi {
+ 	u8					bytes_per_word;
+ 	const struct fsl_dspi_devtype_data	*devtype_data;
+ 
+-	wait_queue_head_t			waitq;
+-	u32					waitflags;
++	struct completion			xfer_done;
+ 
+ 	struct fsl_dspi_dma			*dma;
+ };
+@@ -714,10 +713,8 @@ static irqreturn_t dspi_interrupt(int irq, void *dev_id)
+ 	if (!(spi_sr & SPI_SR_EOQF))
+ 		return IRQ_NONE;
+ 
+-	if (dspi_rxtx(dspi) == 0) {
+-		dspi->waitflags = 1;
+-		wake_up_interruptible(&dspi->waitq);
+-	}
++	if (dspi_rxtx(dspi) == 0)
++		complete(&dspi->xfer_done);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -815,13 +812,9 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr,
+ 				status = dspi_poll(dspi);
+ 			} while (status == -EINPROGRESS);
+ 		} else if (trans_mode != DSPI_DMA_MODE) {
+-			status = wait_event_interruptible(dspi->waitq,
+-							  dspi->waitflags);
+-			dspi->waitflags = 0;
++			wait_for_completion(&dspi->xfer_done);
++			reinit_completion(&dspi->xfer_done);
+ 		}
+-		if (status)
+-			dev_err(&dspi->pdev->dev,
+-				"Waiting for transfer to complete failed!\n");
+ 
+ 		spi_transfer_delay_exec(transfer);
+ 	}
+@@ -1021,8 +1014,10 @@ static int dspi_slave_abort(struct spi_master *master)
+ 	 * Terminate all pending DMA transactions for the SPI working
+ 	 * in SLAVE mode.
+ 	 */
+-	dmaengine_terminate_sync(dspi->dma->chan_rx);
+-	dmaengine_terminate_sync(dspi->dma->chan_tx);
++	if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) {
++		dmaengine_terminate_sync(dspi->dma->chan_rx);
++		dmaengine_terminate_sync(dspi->dma->chan_tx);
++	}
+ 
+ 	/* Clear the internal DSPI RX and TX FIFO buffers */
+ 	regmap_update_bits(dspi->regmap, SPI_MCR,
+@@ -1159,7 +1154,7 @@ static int dspi_probe(struct platform_device *pdev)
+ 		goto out_clk_put;
+ 	}
+ 
+-	init_waitqueue_head(&dspi->waitq);
++	init_completion(&dspi->xfer_done);
+ 
+ poll_mode:
+ 
+diff --git a/drivers/staging/media/allegro-dvt/allegro-core.c b/drivers/staging/media/allegro-dvt/allegro-core.c
+index 3be41698df4c..8d8d144f40ac 100644
+--- a/drivers/staging/media/allegro-dvt/allegro-core.c
++++ b/drivers/staging/media/allegro-dvt/allegro-core.c
+@@ -393,7 +393,10 @@ struct mcu_msg_create_channel {
+ 	u32 freq_ird;
+ 	u32 freq_lt;
+ 	u32 gdr_mode;
+-	u32 gop_length;
++	u16 gop_length;
++	u8 num_b;
++	u8 freq_golden_ref;
++
+ 	u32 unknown39;
+ 
+ 	u32 subframe_latency;
+diff --git a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+index 0d8afc3e5d71..4f72d92cd98f 100644
+--- a/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
++++ b/drivers/staging/media/hantro/hantro_h1_jpeg_enc.c
+@@ -67,12 +67,17 @@ hantro_h1_jpeg_enc_set_qtable(struct hantro_dev *vpu,
+ 			      unsigned char *chroma_qtable)
+ {
+ 	u32 reg, i;
++	__be32 *luma_qtable_p;
++	__be32 *chroma_qtable_p;
++
++	luma_qtable_p = (__be32 *)luma_qtable;
++	chroma_qtable_p = (__be32 *)chroma_qtable;
+ 
+ 	for (i = 0; i < H1_JPEG_QUANT_TABLE_COUNT; i++) {
+-		reg = get_unaligned_be32(&luma_qtable[i]);
++		reg = get_unaligned_be32(&luma_qtable_p[i]);
+ 		vepu_write_relaxed(vpu, reg, H1_REG_JPEG_LUMA_QUAT(i));
+ 
+-		reg = get_unaligned_be32(&chroma_qtable[i]);
++		reg = get_unaligned_be32(&chroma_qtable_p[i]);
+ 		vepu_write_relaxed(vpu, reg, H1_REG_JPEG_CHROMA_QUAT(i));
+ 	}
+ }
+diff --git a/drivers/staging/media/hantro/hantro_v4l2.c b/drivers/staging/media/hantro/hantro_v4l2.c
+index 0198bcda26b7..f4ae2cee0f18 100644
+--- a/drivers/staging/media/hantro/hantro_v4l2.c
++++ b/drivers/staging/media/hantro/hantro_v4l2.c
+@@ -295,7 +295,7 @@ static int vidioc_try_fmt(struct file *file, void *priv, struct v4l2_format *f,
+ 		 * +---------------------------+
+ 		 */
+ 		if (ctx->vpu_src_fmt->fourcc == V4L2_PIX_FMT_H264_SLICE &&
+-		    !hantro_needs_postproc(ctx, ctx->vpu_dst_fmt))
++		    !hantro_needs_postproc(ctx, fmt))
+ 			pix_mp->plane_fmt[0].sizeimage +=
+ 				64 * MB_WIDTH(pix_mp->width) *
+ 				     MB_WIDTH(pix_mp->height) + 32;
+diff --git a/drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c b/drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c
+index 4c2d43fb6fd1..a85c4f9fd10a 100644
+--- a/drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c
++++ b/drivers/staging/media/hantro/rk3399_vpu_hw_jpeg_enc.c
+@@ -98,12 +98,17 @@ rk3399_vpu_jpeg_enc_set_qtable(struct hantro_dev *vpu,
+ 			       unsigned char *chroma_qtable)
+ {
+ 	u32 reg, i;
++	__be32 *luma_qtable_p;
++	__be32 *chroma_qtable_p;
++
++	luma_qtable_p = (__be32 *)luma_qtable;
++	chroma_qtable_p = (__be32 *)chroma_qtable;
+ 
+ 	for (i = 0; i < VEPU_JPEG_QUANT_TABLE_COUNT; i++) {
+-		reg = get_unaligned_be32(&luma_qtable[i]);
++		reg = get_unaligned_be32(&luma_qtable_p[i]);
+ 		vepu_write_relaxed(vpu, reg, VEPU_REG_JPEG_LUMA_QUAT(i));
+ 
+-		reg = get_unaligned_be32(&chroma_qtable[i]);
++		reg = get_unaligned_be32(&chroma_qtable_p[i]);
+ 		vepu_write_relaxed(vpu, reg, VEPU_REG_JPEG_CHROMA_QUAT(i));
+ 	}
+ }
+diff --git a/drivers/staging/media/imx/imx7-media-csi.c b/drivers/staging/media/imx/imx7-media-csi.c
+index db30e2c70f2f..f45920b3137e 100644
+--- a/drivers/staging/media/imx/imx7-media-csi.c
++++ b/drivers/staging/media/imx/imx7-media-csi.c
+@@ -1009,6 +1009,7 @@ static int imx7_csi_try_fmt(struct imx7_csi *csi,
+ 		sdformat->format.width = in_fmt->width;
+ 		sdformat->format.height = in_fmt->height;
+ 		sdformat->format.code = in_fmt->code;
++		sdformat->format.field = in_fmt->field;
+ 		*cc = in_cc;
+ 
+ 		sdformat->format.colorspace = in_fmt->colorspace;
+@@ -1023,6 +1024,9 @@ static int imx7_csi_try_fmt(struct imx7_csi *csi,
+ 							 false);
+ 			sdformat->format.code = (*cc)->codes[0];
+ 		}
++
++		if (sdformat->format.field != V4L2_FIELD_INTERLACED)
++			sdformat->format.field = V4L2_FIELD_NONE;
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/staging/media/imx/imx7-mipi-csis.c b/drivers/staging/media/imx/imx7-mipi-csis.c
+index 383abecb3bec..0053e8b0b88e 100644
+--- a/drivers/staging/media/imx/imx7-mipi-csis.c
++++ b/drivers/staging/media/imx/imx7-mipi-csis.c
+@@ -577,7 +577,7 @@ static int mipi_csis_s_stream(struct v4l2_subdev *mipi_sd, int enable)
+ 		state->flags |= ST_STREAMING;
+ 	} else {
+ 		v4l2_subdev_call(state->src_sd, video, s_stream, 0);
+-		ret = v4l2_subdev_call(state->src_sd, core, s_power, 1);
++		ret = v4l2_subdev_call(state->src_sd, core, s_power, 0);
+ 		mipi_csis_stop_stream(state);
+ 		state->flags &= ~ST_STREAMING;
+ 		if (state->debug)
+diff --git a/drivers/staging/media/rkisp1/rkisp1-dev.c b/drivers/staging/media/rkisp1/rkisp1-dev.c
+index 558126e66465..9b47f41b36e9 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-dev.c
++++ b/drivers/staging/media/rkisp1/rkisp1-dev.c
+@@ -502,8 +502,7 @@ static int rkisp1_probe(struct platform_device *pdev)
+ 	strscpy(rkisp1->media_dev.model, RKISP1_DRIVER_NAME,
+ 		sizeof(rkisp1->media_dev.model));
+ 	rkisp1->media_dev.dev = &pdev->dev;
+-	strscpy(rkisp1->media_dev.bus_info,
+-		"platform: " RKISP1_DRIVER_NAME,
++	strscpy(rkisp1->media_dev.bus_info, RKISP1_BUS_INFO,
+ 		sizeof(rkisp1->media_dev.bus_info));
+ 	media_device_init(&rkisp1->media_dev);
+ 
+diff --git a/drivers/staging/media/rkisp1/rkisp1-isp.c b/drivers/staging/media/rkisp1/rkisp1-isp.c
+index 328c7ea60971..db892620a567 100644
+--- a/drivers/staging/media/rkisp1/rkisp1-isp.c
++++ b/drivers/staging/media/rkisp1/rkisp1-isp.c
+@@ -683,7 +683,7 @@ static void rkisp1_isp_set_src_fmt(struct rkisp1_isp *isp,
+ 
+ 	src_fmt->code = format->code;
+ 	mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+-	if (!mbus_info) {
++	if (!mbus_info || !(mbus_info->direction & RKISP1_DIR_SRC)) {
+ 		src_fmt->code = RKISP1_DEF_SRC_PAD_FMT;
+ 		mbus_info = rkisp1_isp_mbus_info_get(src_fmt->code);
+ 	}
+@@ -767,7 +767,7 @@ static void rkisp1_isp_set_sink_fmt(struct rkisp1_isp *isp,
+ 					  which);
+ 	sink_fmt->code = format->code;
+ 	mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+-	if (!mbus_info) {
++	if (!mbus_info || !(mbus_info->direction & RKISP1_DIR_SINK)) {
+ 		sink_fmt->code = RKISP1_DEF_SINK_PAD_FMT;
+ 		mbus_info = rkisp1_isp_mbus_info_get(sink_fmt->code);
+ 	}
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index 3633c924848e..a1dafec0890a 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -485,7 +485,8 @@ static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+ 		if (!mt7621_pcie_port_is_linkup(port)) {
+ 			dev_err(dev, "pcie%d no card, disable it (RST & CLK)\n",
+ 				slot);
+-			phy_power_off(port->phy);
++			if (slot != 1)
++				phy_power_off(port->phy);
+ 			mt7621_control_assert(port);
+ 			mt7621_pcie_port_clk_disable(port);
+ 			port->enabled = false;
+diff --git a/drivers/staging/wilc1000/wlan.c b/drivers/staging/wilc1000/wlan.c
+index 601e4d1345d2..05b8adfe001d 100644
+--- a/drivers/staging/wilc1000/wlan.c
++++ b/drivers/staging/wilc1000/wlan.c
+@@ -572,7 +572,6 @@ int wilc_wlan_handle_txq(struct wilc *wilc, u32 *txq_count)
+ 				entries = ((reg >> 3) & 0x3f);
+ 				break;
+ 			}
+-			release_bus(wilc, WILC_BUS_RELEASE_ALLOW_SLEEP);
+ 		} while (--timeout);
+ 		if (timeout <= 0) {
+ 			ret = func->hif_write_reg(wilc, WILC_HOST_VMM_CTL, 0x0);
+diff --git a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+index efae0c02d898..6cad15eb9cf4 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3400_thermal.c
+@@ -369,8 +369,8 @@ static int int3400_thermal_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct acpi_device_id int3400_thermal_match[] = {
+-	{"INT1040", 0},
+ 	{"INT3400", 0},
++	{"INTC1040", 0},
+ 	{}
+ };
+ 
+diff --git a/drivers/thermal/intel/int340x_thermal/int3403_thermal.c b/drivers/thermal/intel/int340x_thermal/int3403_thermal.c
+index aeece1e136a5..f86cbb125e2f 100644
+--- a/drivers/thermal/intel/int340x_thermal/int3403_thermal.c
++++ b/drivers/thermal/intel/int340x_thermal/int3403_thermal.c
+@@ -282,8 +282,8 @@ static int int3403_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct acpi_device_id int3403_device_ids[] = {
+-	{"INT1043", 0},
+ 	{"INT3403", 0},
++	{"INTC1043", 0},
+ 	{"", 0},
+ };
+ MODULE_DEVICE_TABLE(acpi, int3403_device_ids);
+diff --git a/drivers/usb/dwc3/core.c b/drivers/usb/dwc3/core.c
+index 1d85c42b9c67..43bd5b1ea9e2 100644
+--- a/drivers/usb/dwc3/core.c
++++ b/drivers/usb/dwc3/core.c
+@@ -1029,6 +1029,9 @@ static int dwc3_core_init(struct dwc3 *dwc)
+ 		if (dwc->dis_tx_ipgap_linecheck_quirk)
+ 			reg |= DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS;
+ 
++		if (dwc->parkmode_disable_ss_quirk)
++			reg |= DWC3_GUCTL1_PARKMODE_DISABLE_SS;
++
+ 		dwc3_writel(dwc->regs, DWC3_GUCTL1, reg);
+ 	}
+ 
+@@ -1342,6 +1345,8 @@ static void dwc3_get_properties(struct dwc3 *dwc)
+ 				"snps,dis-del-phy-power-chg-quirk");
+ 	dwc->dis_tx_ipgap_linecheck_quirk = device_property_read_bool(dev,
+ 				"snps,dis-tx-ipgap-linecheck-quirk");
++	dwc->parkmode_disable_ss_quirk = device_property_read_bool(dev,
++				"snps,parkmode-disable-ss-quirk");
+ 
+ 	dwc->tx_de_emphasis_quirk = device_property_read_bool(dev,
+ 				"snps,tx_de_emphasis_quirk");
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 77c4a9abe365..3ecc69c5b150 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -249,6 +249,7 @@
+ #define DWC3_GUCTL_HSTINAUTORETRY	BIT(14)
+ 
+ /* Global User Control 1 Register */
++#define DWC3_GUCTL1_PARKMODE_DISABLE_SS	BIT(17)
+ #define DWC3_GUCTL1_TX_IPGAP_LINECHECK_DIS	BIT(28)
+ #define DWC3_GUCTL1_DEV_L1_EXIT_BY_HW	BIT(24)
+ 
+@@ -1024,6 +1025,8 @@ struct dwc3_scratchpad_array {
+  *			change quirk.
+  * @dis_tx_ipgap_linecheck_quirk: set if we disable u2mac linestate
+  *			check during HS transmit.
++ * @parkmode_disable_ss_quirk: set if we need to disable all SuperSpeed
++ *			instances in park mode.
+  * @tx_de_emphasis_quirk: set if we enable Tx de-emphasis quirk
+  * @tx_de_emphasis: Tx de-emphasis value
+  * 	0	- -6dB de-emphasis
+@@ -1215,6 +1218,7 @@ struct dwc3 {
+ 	unsigned		dis_u2_freeclk_exists_quirk:1;
+ 	unsigned		dis_del_phy_power_chg_quirk:1;
+ 	unsigned		dis_tx_ipgap_linecheck_quirk:1;
++	unsigned		parkmode_disable_ss_quirk:1;
+ 
+ 	unsigned		tx_de_emphasis_quirk:1;
+ 	unsigned		tx_de_emphasis:2;
+diff --git a/drivers/usb/gadget/composite.c b/drivers/usb/gadget/composite.c
+index 223f72d4d9ed..cb4950cf1cdc 100644
+--- a/drivers/usb/gadget/composite.c
++++ b/drivers/usb/gadget/composite.c
+@@ -861,6 +861,11 @@ static int set_config(struct usb_composite_dev *cdev,
+ 	else
+ 		power = min(power, 900U);
+ done:
++	if (power <= USB_SELF_POWER_VBUS_MAX_DRAW)
++		usb_gadget_set_selfpowered(gadget);
++	else
++		usb_gadget_clear_selfpowered(gadget);
++
+ 	usb_gadget_vbus_draw(gadget, power);
+ 	if (result >= 0 && cdev->delayed_status)
+ 		result = USB_GADGET_DELAYED_STATUS;
+@@ -2279,6 +2284,7 @@ void composite_suspend(struct usb_gadget *gadget)
+ 
+ 	cdev->suspended = 1;
+ 
++	usb_gadget_set_selfpowered(gadget);
+ 	usb_gadget_vbus_draw(gadget, 2);
+ }
+ 
+@@ -2307,6 +2313,9 @@ void composite_resume(struct usb_gadget *gadget)
+ 		else
+ 			maxpower = min(maxpower, 900U);
+ 
++		if (maxpower > USB_SELF_POWER_VBUS_MAX_DRAW)
++			usb_gadget_clear_selfpowered(gadget);
++
+ 		usb_gadget_vbus_draw(gadget, maxpower);
+ 	}
+ 
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 571917677d35..767f30b86645 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1120,6 +1120,7 @@ static ssize_t ffs_epfile_io(struct file *file, struct ffs_io_data *io_data)
+ 
+ 		ret = usb_ep_queue(ep->ep, req, GFP_ATOMIC);
+ 		if (unlikely(ret)) {
++			io_data->req = NULL;
+ 			usb_ep_free_request(ep->ep, req);
+ 			goto error_lock;
+ 		}
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index dbac0fa9748d..fe38275363e0 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -1157,8 +1157,10 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
+ 		xhci_dbg(xhci, "Stop HCD\n");
+ 		xhci_halt(xhci);
+ 		xhci_zero_64b_regs(xhci);
+-		xhci_reset(xhci);
++		retval = xhci_reset(xhci);
+ 		spin_unlock_irq(&xhci->lock);
++		if (retval)
++			return retval;
+ 		xhci_cleanup_msix(xhci);
+ 
+ 		xhci_dbg(xhci, "// Disabling event ring interrupts\n");
+diff --git a/drivers/usb/phy/phy-tegra-usb.c b/drivers/usb/phy/phy-tegra-usb.c
+index 6153cc35aba0..cffe2aced488 100644
+--- a/drivers/usb/phy/phy-tegra-usb.c
++++ b/drivers/usb/phy/phy-tegra-usb.c
+@@ -12,12 +12,11 @@
+ #include <linux/delay.h>
+ #include <linux/err.h>
+ #include <linux/export.h>
+-#include <linux/gpio.h>
++#include <linux/gpio/consumer.h>
+ #include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/of.h>
+ #include <linux/of_device.h>
+-#include <linux/of_gpio.h>
+ #include <linux/platform_device.h>
+ #include <linux/resource.h>
+ #include <linux/slab.h>
+diff --git a/drivers/usb/typec/ucsi/ucsi_ccg.c b/drivers/usb/typec/ucsi/ucsi_ccg.c
+index a5b8530490db..2658cda5da11 100644
+--- a/drivers/usb/typec/ucsi/ucsi_ccg.c
++++ b/drivers/usb/typec/ucsi/ucsi_ccg.c
+@@ -1219,6 +1219,7 @@ static int ccg_restart(struct ucsi_ccg *uc)
+ 		return status;
+ 	}
+ 
++	pm_runtime_enable(uc->dev);
+ 	return 0;
+ }
+ 
+@@ -1234,6 +1235,7 @@ static void ccg_update_firmware(struct work_struct *work)
+ 
+ 	if (flash_mode != FLASH_NOT_NEEDED) {
+ 		ucsi_unregister(uc->ucsi);
++		pm_runtime_disable(uc->dev);
+ 		free_irq(uc->irq, uc);
+ 
+ 		ccg_fw_update(uc, flash_mode);
+diff --git a/drivers/vfio/platform/vfio_platform.c b/drivers/vfio/platform/vfio_platform.c
+index ae1a5eb98620..1e2769010089 100644
+--- a/drivers/vfio/platform/vfio_platform.c
++++ b/drivers/vfio/platform/vfio_platform.c
+@@ -44,7 +44,7 @@ static int get_platform_irq(struct vfio_platform_device *vdev, int i)
+ {
+ 	struct platform_device *pdev = (struct platform_device *) vdev->opaque;
+ 
+-	return platform_get_irq(pdev, i);
++	return platform_get_irq_optional(pdev, i);
+ }
+ 
+ static int vfio_platform_probe(struct platform_device *pdev)
+diff --git a/fs/btrfs/async-thread.c b/fs/btrfs/async-thread.c
+index 1d32a07bb2d1..309516e6a968 100644
+--- a/fs/btrfs/async-thread.c
++++ b/fs/btrfs/async-thread.c
+@@ -395,3 +395,11 @@ void btrfs_set_work_high_priority(struct btrfs_work *work)
+ {
+ 	set_bit(WORK_HIGH_PRIO_BIT, &work->flags);
+ }
++
++void btrfs_flush_workqueue(struct btrfs_workqueue *wq)
++{
++	if (wq->high)
++		flush_workqueue(wq->high->normal_wq);
++
++	flush_workqueue(wq->normal->normal_wq);
++}
+diff --git a/fs/btrfs/async-thread.h b/fs/btrfs/async-thread.h
+index a4434301d84d..3204daa51b95 100644
+--- a/fs/btrfs/async-thread.h
++++ b/fs/btrfs/async-thread.h
+@@ -44,5 +44,6 @@ void btrfs_set_work_high_priority(struct btrfs_work *work);
+ struct btrfs_fs_info * __pure btrfs_work_owner(const struct btrfs_work *work);
+ struct btrfs_fs_info * __pure btrfs_workqueue_owner(const struct __btrfs_workqueue *wq);
+ bool btrfs_workqueue_normal_congested(const struct btrfs_workqueue *wq);
++void btrfs_flush_workqueue(struct btrfs_workqueue *wq);
+ 
+ #endif
+diff --git a/fs/btrfs/delayed-inode.c b/fs/btrfs/delayed-inode.c
+index d3e15e1d4a91..18509746208b 100644
+--- a/fs/btrfs/delayed-inode.c
++++ b/fs/btrfs/delayed-inode.c
+@@ -6,6 +6,7 @@
+ 
+ #include <linux/slab.h>
+ #include <linux/iversion.h>
++#include <linux/sched/mm.h>
+ #include "misc.h"
+ #include "delayed-inode.h"
+ #include "disk-io.h"
+@@ -805,11 +806,14 @@ static int btrfs_insert_delayed_item(struct btrfs_trans_handle *trans,
+ 				     struct btrfs_delayed_item *delayed_item)
+ {
+ 	struct extent_buffer *leaf;
++	unsigned int nofs_flag;
+ 	char *ptr;
+ 	int ret;
+ 
++	nofs_flag = memalloc_nofs_save();
+ 	ret = btrfs_insert_empty_item(trans, root, path, &delayed_item->key,
+ 				      delayed_item->data_len);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (ret < 0 && ret != -EEXIST)
+ 		return ret;
+ 
+@@ -937,6 +941,7 @@ static int btrfs_delete_delayed_items(struct btrfs_trans_handle *trans,
+ 				      struct btrfs_delayed_node *node)
+ {
+ 	struct btrfs_delayed_item *curr, *prev;
++	unsigned int nofs_flag;
+ 	int ret = 0;
+ 
+ do_again:
+@@ -945,7 +950,9 @@ do_again:
+ 	if (!curr)
+ 		goto delete_fail;
+ 
++	nofs_flag = memalloc_nofs_save();
+ 	ret = btrfs_search_slot(trans, root, &curr->key, path, -1, 1);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (ret < 0)
+ 		goto delete_fail;
+ 	else if (ret > 0) {
+@@ -1012,6 +1019,7 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans,
+ 	struct btrfs_key key;
+ 	struct btrfs_inode_item *inode_item;
+ 	struct extent_buffer *leaf;
++	unsigned int nofs_flag;
+ 	int mod;
+ 	int ret;
+ 
+@@ -1024,7 +1032,9 @@ static int __btrfs_update_delayed_inode(struct btrfs_trans_handle *trans,
+ 	else
+ 		mod = 1;
+ 
++	nofs_flag = memalloc_nofs_save();
+ 	ret = btrfs_lookup_inode(trans, root, path, &key, mod);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (ret > 0) {
+ 		btrfs_release_path(path);
+ 		return -ENOENT;
+@@ -1075,7 +1085,10 @@ search:
+ 
+ 	key.type = BTRFS_INODE_EXTREF_KEY;
+ 	key.offset = -1;
++
++	nofs_flag = memalloc_nofs_save();
+ 	ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
++	memalloc_nofs_restore(nofs_flag);
+ 	if (ret < 0)
+ 		goto err_out;
+ 	ASSERT(ret);
+diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
+index c6c9a6a8e6c8..29fc96dfa508 100644
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3061,6 +3061,18 @@ int __cold open_ctree(struct super_block *sb,
+ 	if (ret)
+ 		goto fail_tree_roots;
+ 
++	/*
++	 * If we have a uuid root and we're not being told to rescan we need to
++	 * check the generation here so we can set the
++	 * BTRFS_FS_UPDATE_UUID_TREE_GEN bit.  Otherwise we could commit the
++	 * transaction during a balance or the log replay without updating the
++	 * uuid generation, and then if we crash we would rescan the uuid tree,
++	 * even though it was perfectly fine.
++	 */
++	if (fs_info->uuid_root && !btrfs_test_opt(fs_info, RESCAN_UUID_TREE) &&
++	    fs_info->generation == btrfs_super_uuid_tree_generation(disk_super))
++		set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags);
++
+ 	ret = btrfs_verify_dev_extents(fs_info);
+ 	if (ret) {
+ 		btrfs_err(fs_info,
+@@ -3285,8 +3297,6 @@ int __cold open_ctree(struct super_block *sb,
+ 			close_ctree(fs_info);
+ 			return ret;
+ 		}
+-	} else {
+-		set_bit(BTRFS_FS_UPDATE_UUID_TREE_GEN, &fs_info->flags);
+ 	}
+ 	set_bit(BTRFS_FS_OPEN, &fs_info->flags);
+ 
+@@ -3990,6 +4000,19 @@ void __cold close_ctree(struct btrfs_fs_info *fs_info)
+ 		 */
+ 		btrfs_delete_unused_bgs(fs_info);
+ 
++		/*
++		 * There might be existing delayed inode workers still running
++		 * and holding an empty delayed inode item. We must wait for
++		 * them to complete first because they can create a transaction.
++		 * This happens when someone calls btrfs_balance_delayed_items()
++		 * and then a transaction commit runs the same delayed nodes
++		 * before any delayed worker has done something with the nodes.
++		 * We must wait for any worker here and not at transaction
++		 * commit time since that could cause a deadlock.
++		 * This is a very rare case.
++		 */
++		btrfs_flush_workqueue(fs_info->delayed_workers);
++
+ 		ret = btrfs_commit_super(fs_info);
+ 		if (ret)
+ 			btrfs_err(fs_info, "commit super ret %d", ret);
+diff --git a/fs/btrfs/extent_io.c b/fs/btrfs/extent_io.c
+index c0f202741e09..a4128dedbbf4 100644
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3905,6 +3905,7 @@ int btree_write_cache_pages(struct address_space *mapping,
+ 		.extent_locked = 0,
+ 		.sync_io = wbc->sync_mode == WB_SYNC_ALL,
+ 	};
++	struct btrfs_fs_info *fs_info = BTRFS_I(mapping->host)->root->fs_info;
+ 	int ret = 0;
+ 	int done = 0;
+ 	int nr_to_write_done = 0;
+@@ -4018,7 +4019,39 @@ retry:
+ 		end_write_bio(&epd, ret);
+ 		return ret;
+ 	}
+-	ret = flush_write_bio(&epd);
++	/*
++	 * If something went wrong, don't allow any metadata write bio to be
++	 * submitted.
++	 *
++	 * This would prevent use-after-free if we had dirty pages not
++	 * cleaned up, which can still happen by fuzzed images.
++	 *
++	 * - Bad extent tree
++	 *   Allowing existing tree block to be allocated for other trees.
++	 *
++	 * - Log tree operations
++	 *   Exiting tree blocks get allocated to log tree, bumps its
++	 *   generation, then get cleaned in tree re-balance.
++	 *   Such tree block will not be written back, since it's clean,
++	 *   thus no WRITTEN flag set.
++	 *   And after log writes back, this tree block is not traced by
++	 *   any dirty extent_io_tree.
++	 *
++	 * - Offending tree block gets re-dirtied from its original owner
++	 *   Since it has bumped generation, no WRITTEN flag, it can be
++	 *   reused without COWing. This tree block will not be traced
++	 *   by btrfs_transaction::dirty_pages.
++	 *
++	 *   Now such dirty tree block will not be cleaned by any dirty
++	 *   extent io tree. Thus we don't want to submit such wild eb
++	 *   if the fs already has error.
++	 */
++	if (!test_bit(BTRFS_FS_STATE_ERROR, &fs_info->fs_state)) {
++		ret = flush_write_bio(&epd);
++	} else {
++		ret = -EUCLEAN;
++		end_write_bio(&epd, ret);
++	}
+ 	return ret;
+ }
+ 
+diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c
+index a16da274c9aa..3aa31bd7d4ad 100644
+--- a/fs/btrfs/file.c
++++ b/fs/btrfs/file.c
+@@ -2070,6 +2070,16 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 
+ 	btrfs_init_log_ctx(&ctx, inode);
+ 
++	/*
++	 * Set the range to full if the NO_HOLES feature is not enabled.
++	 * This is to avoid missing file extent items representing holes after
++	 * replaying the log.
++	 */
++	if (!btrfs_fs_incompat(fs_info, NO_HOLES)) {
++		start = 0;
++		end = LLONG_MAX;
++	}
++
+ 	/*
+ 	 * We write the dirty pages in the range and wait until they complete
+ 	 * out of the ->i_mutex. If so, we can flush the dirty pages by
+@@ -2124,6 +2134,7 @@ int btrfs_sync_file(struct file *file, loff_t start, loff_t end, int datasync)
+ 	 */
+ 	ret = start_ordered_ops(inode, start, end);
+ 	if (ret) {
++		up_write(&BTRFS_I(inode)->dio_sem);
+ 		inode_unlock(inode);
+ 		goto out;
+ 	}
+diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
+index ff1870ff3474..afc9752e984c 100644
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -1030,6 +1030,7 @@ out_add_root:
+ 	ret = qgroup_rescan_init(fs_info, 0, 1);
+ 	if (!ret) {
+ 	        qgroup_rescan_zero_tracking(fs_info);
++		fs_info->qgroup_rescan_running = true;
+ 	        btrfs_queue_work(fs_info->qgroup_rescan_workers,
+ 	                         &fs_info->qgroup_rescan_work);
+ 	}
+@@ -3263,7 +3264,6 @@ qgroup_rescan_init(struct btrfs_fs_info *fs_info, u64 progress_objectid,
+ 		sizeof(fs_info->qgroup_rescan_progress));
+ 	fs_info->qgroup_rescan_progress.objectid = progress_objectid;
+ 	init_completion(&fs_info->qgroup_rescan_completion);
+-	fs_info->qgroup_rescan_running = true;
+ 
+ 	spin_unlock(&fs_info->qgroup_lock);
+ 	mutex_unlock(&fs_info->qgroup_rescan_lock);
+@@ -3326,8 +3326,11 @@ btrfs_qgroup_rescan(struct btrfs_fs_info *fs_info)
+ 
+ 	qgroup_rescan_zero_tracking(fs_info);
+ 
++	mutex_lock(&fs_info->qgroup_rescan_lock);
++	fs_info->qgroup_rescan_running = true;
+ 	btrfs_queue_work(fs_info->qgroup_rescan_workers,
+ 			 &fs_info->qgroup_rescan_work);
++	mutex_unlock(&fs_info->qgroup_rescan_lock);
+ 
+ 	return 0;
+ }
+@@ -3363,9 +3366,13 @@ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
+ void
+ btrfs_qgroup_rescan_resume(struct btrfs_fs_info *fs_info)
+ {
+-	if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN)
++	if (fs_info->qgroup_flags & BTRFS_QGROUP_STATUS_FLAG_RESCAN) {
++		mutex_lock(&fs_info->qgroup_rescan_lock);
++		fs_info->qgroup_rescan_running = true;
+ 		btrfs_queue_work(fs_info->qgroup_rescan_workers,
+ 				 &fs_info->qgroup_rescan_work);
++		mutex_unlock(&fs_info->qgroup_rescan_lock);
++	}
+ }
+ 
+ /*
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 995d4b8b1cfd..4bb0f9e4f3f4 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -1186,7 +1186,7 @@ out:
+ 			free_backref_node(cache, lower);
+ 		}
+ 
+-		free_backref_node(cache, node);
++		remove_backref_node(cache, node);
+ 		return ERR_PTR(err);
+ 	}
+ 	ASSERT(!node || !node->detached);
+@@ -1298,7 +1298,7 @@ static int __must_check __add_reloc_root(struct btrfs_root *root)
+ 	if (!node)
+ 		return -ENOMEM;
+ 
+-	node->bytenr = root->node->start;
++	node->bytenr = root->commit_root->start;
+ 	node->data = root;
+ 
+ 	spin_lock(&rc->reloc_root_tree.lock);
+@@ -1329,10 +1329,11 @@ static void __del_reloc_root(struct btrfs_root *root)
+ 	if (rc && root->node) {
+ 		spin_lock(&rc->reloc_root_tree.lock);
+ 		rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+-				      root->node->start);
++				      root->commit_root->start);
+ 		if (rb_node) {
+ 			node = rb_entry(rb_node, struct mapping_node, rb_node);
+ 			rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
++			RB_CLEAR_NODE(&node->rb_node);
+ 		}
+ 		spin_unlock(&rc->reloc_root_tree.lock);
+ 		if (!node)
+@@ -1350,7 +1351,7 @@ static void __del_reloc_root(struct btrfs_root *root)
+  * helper to update the 'address of tree root -> reloc tree'
+  * mapping
+  */
+-static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
++static int __update_reloc_root(struct btrfs_root *root)
+ {
+ 	struct btrfs_fs_info *fs_info = root->fs_info;
+ 	struct rb_node *rb_node;
+@@ -1359,7 +1360,7 @@ static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
+ 
+ 	spin_lock(&rc->reloc_root_tree.lock);
+ 	rb_node = tree_search(&rc->reloc_root_tree.rb_root,
+-			      root->node->start);
++			      root->commit_root->start);
+ 	if (rb_node) {
+ 		node = rb_entry(rb_node, struct mapping_node, rb_node);
+ 		rb_erase(&node->rb_node, &rc->reloc_root_tree.rb_root);
+@@ -1371,7 +1372,7 @@ static int __update_reloc_root(struct btrfs_root *root, u64 new_bytenr)
+ 	BUG_ON((struct btrfs_root *)node->data != root);
+ 
+ 	spin_lock(&rc->reloc_root_tree.lock);
+-	node->bytenr = new_bytenr;
++	node->bytenr = root->node->start;
+ 	rb_node = tree_insert(&rc->reloc_root_tree.rb_root,
+ 			      node->bytenr, &node->rb_node);
+ 	spin_unlock(&rc->reloc_root_tree.lock);
+@@ -1529,6 +1530,7 @@ int btrfs_update_reloc_root(struct btrfs_trans_handle *trans,
+ 	}
+ 
+ 	if (reloc_root->commit_root != reloc_root->node) {
++		__update_reloc_root(reloc_root);
+ 		btrfs_set_root_node(root_item, reloc_root->node);
+ 		free_extent_buffer(reloc_root->commit_root);
+ 		reloc_root->commit_root = btrfs_root_node(reloc_root);
+@@ -2561,7 +2563,21 @@ out:
+ 			free_reloc_roots(&reloc_roots);
+ 	}
+ 
+-	BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root));
++	/*
++	 * We used to have
++	 *
++	 * BUG_ON(!RB_EMPTY_ROOT(&rc->reloc_root_tree.rb_root));
++	 *
++	 * here, but it's wrong.  If we fail to start the transaction in
++	 * prepare_to_merge() we will have only 0 ref reloc roots, none of which
++	 * have actually been removed from the reloc_root_tree rb tree.  This is
++	 * fine because we're bailing here, and we hold a reference on the root
++	 * for the list that holds it, so these roots will be cleaned up when we
++	 * do the reloc_dirty_list afterwards.  Meanwhile the root->reloc_root
++	 * will be cleaned up on unmount.
++	 *
++	 * The remaining nodes will be cleaned up by free_reloc_control.
++	 */
+ }
+ 
+ static void free_block_list(struct rb_root *blocks)
+@@ -3161,9 +3177,8 @@ int relocate_tree_blocks(struct btrfs_trans_handle *trans,
+ 		ret = relocate_tree_block(trans, rc, node, &block->key,
+ 					  path);
+ 		if (ret < 0) {
+-			if (ret != -EAGAIN || &block->rb_node == rb_first(blocks))
+-				err = ret;
+-			goto out;
++			err = ret;
++			break;
+ 		}
+ 	}
+ out:
+@@ -4137,12 +4152,6 @@ restart:
+ 		if (!RB_EMPTY_ROOT(&blocks)) {
+ 			ret = relocate_tree_blocks(trans, rc, &blocks);
+ 			if (ret < 0) {
+-				/*
+-				 * if we fail to relocate tree blocks, force to update
+-				 * backref cache when committing transaction.
+-				 */
+-				rc->backref_cache.last_trans = trans->transid - 1;
+-
+ 				if (ret != -EAGAIN) {
+ 					err = ret;
+ 					break;
+@@ -4212,10 +4221,10 @@ restart:
+ 		goto out_free;
+ 	}
+ 	btrfs_commit_transaction(trans);
++out_free:
+ 	ret = clean_dirty_subvols(rc);
+ 	if (ret < 0 && !err)
+ 		err = ret;
+-out_free:
+ 	btrfs_free_block_rsv(fs_info, rc->block_rsv);
+ 	btrfs_free_path(path);
+ 	return err;
+@@ -4584,9 +4593,8 @@ int btrfs_recover_relocation(struct btrfs_root *root)
+ 
+ 	trans = btrfs_join_transaction(rc->extent_root);
+ 	if (IS_ERR(trans)) {
+-		unset_reloc_control(rc);
+ 		err = PTR_ERR(trans);
+-		goto out_free;
++		goto out_unset;
+ 	}
+ 
+ 	rc->merge_reloc_tree = 1;
+@@ -4606,7 +4614,7 @@ int btrfs_recover_relocation(struct btrfs_root *root)
+ 		if (IS_ERR(fs_root)) {
+ 			err = PTR_ERR(fs_root);
+ 			list_add_tail(&reloc_root->root_list, &reloc_roots);
+-			goto out_free;
++			goto out_unset;
+ 		}
+ 
+ 		err = __add_reloc_root(reloc_root);
+@@ -4616,7 +4624,7 @@ int btrfs_recover_relocation(struct btrfs_root *root)
+ 
+ 	err = btrfs_commit_transaction(trans);
+ 	if (err)
+-		goto out_free;
++		goto out_unset;
+ 
+ 	merge_reloc_roots(rc);
+ 
+@@ -4625,14 +4633,15 @@ int btrfs_recover_relocation(struct btrfs_root *root)
+ 	trans = btrfs_join_transaction(rc->extent_root);
+ 	if (IS_ERR(trans)) {
+ 		err = PTR_ERR(trans);
+-		goto out_free;
++		goto out_clean;
+ 	}
+ 	err = btrfs_commit_transaction(trans);
+-
++out_clean:
+ 	ret = clean_dirty_subvols(rc);
+ 	if (ret < 0 && !err)
+ 		err = ret;
+-out_free:
++out_unset:
++	unset_reloc_control(rc);
+ 	kfree(rc);
+ out:
+ 	if (!list_empty(&reloc_roots))
+@@ -4720,11 +4729,6 @@ int btrfs_reloc_cow_block(struct btrfs_trans_handle *trans,
+ 	BUG_ON(rc->stage == UPDATE_DATA_PTRS &&
+ 	       root->root_key.objectid == BTRFS_DATA_RELOC_TREE_OBJECTID);
+ 
+-	if (root->root_key.objectid == BTRFS_TREE_RELOC_OBJECTID) {
+-		if (buf == root->node)
+-			__update_reloc_root(root, cow->start);
+-	}
+-
+ 	level = btrfs_header_level(buf);
+ 	if (btrfs_header_generation(buf) <=
+ 	    btrfs_root_last_snapshot(&root->root_item))
+diff --git a/fs/btrfs/space-info.c b/fs/btrfs/space-info.c
+index 01297c5b2666..8c03f6737882 100644
+--- a/fs/btrfs/space-info.c
++++ b/fs/btrfs/space-info.c
+@@ -159,25 +159,19 @@ static inline u64 calc_global_rsv_need_space(struct btrfs_block_rsv *global)
+ 	return (global->size << 1);
+ }
+ 
+-int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
+-			 struct btrfs_space_info *space_info, u64 bytes,
+-			 enum btrfs_reserve_flush_enum flush)
++static u64 calc_available_free_space(struct btrfs_fs_info *fs_info,
++			  struct btrfs_space_info *space_info,
++			  enum btrfs_reserve_flush_enum flush)
+ {
+ 	u64 profile;
+ 	u64 avail;
+-	u64 used;
+ 	int factor;
+ 
+-	/* Don't overcommit when in mixed mode. */
+-	if (space_info->flags & BTRFS_BLOCK_GROUP_DATA)
+-		return 0;
+-
+ 	if (space_info->flags & BTRFS_BLOCK_GROUP_SYSTEM)
+ 		profile = btrfs_system_alloc_profile(fs_info);
+ 	else
+ 		profile = btrfs_metadata_alloc_profile(fs_info);
+ 
+-	used = btrfs_space_info_used(space_info, true);
+ 	avail = atomic64_read(&fs_info->free_chunk_space);
+ 
+ 	/*
+@@ -198,6 +192,22 @@ int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
+ 		avail >>= 3;
+ 	else
+ 		avail >>= 1;
++	return avail;
++}
++
++int btrfs_can_overcommit(struct btrfs_fs_info *fs_info,
++			 struct btrfs_space_info *space_info, u64 bytes,
++			 enum btrfs_reserve_flush_enum flush)
++{
++	u64 avail;
++	u64 used;
++
++	/* Don't overcommit when in mixed mode */
++	if (space_info->flags & BTRFS_BLOCK_GROUP_DATA)
++		return 0;
++
++	used = btrfs_space_info_used(space_info, true);
++	avail = calc_available_free_space(fs_info, space_info, flush);
+ 
+ 	if (used + bytes < space_info->total_bytes + avail)
+ 		return 1;
+@@ -629,6 +639,7 @@ btrfs_calc_reclaim_metadata_size(struct btrfs_fs_info *fs_info,
+ {
+ 	struct reserve_ticket *ticket;
+ 	u64 used;
++	u64 avail;
+ 	u64 expected;
+ 	u64 to_reclaim = 0;
+ 
+@@ -636,6 +647,20 @@ btrfs_calc_reclaim_metadata_size(struct btrfs_fs_info *fs_info,
+ 		to_reclaim += ticket->bytes;
+ 	list_for_each_entry(ticket, &space_info->priority_tickets, list)
+ 		to_reclaim += ticket->bytes;
++
++	avail = calc_available_free_space(fs_info, space_info,
++					  BTRFS_RESERVE_FLUSH_ALL);
++	used = btrfs_space_info_used(space_info, true);
++
++	/*
++	 * We may be flushing because suddenly we have less space than we had
++	 * before, and now we're well over-committed based on our current free
++	 * space.  If that's the case add in our overage so we make sure to put
++	 * appropriate pressure on the flushing state machine.
++	 */
++	if (space_info->total_bytes + avail < used)
++		to_reclaim += used - (space_info->total_bytes + avail);
++
+ 	if (to_reclaim)
+ 		return to_reclaim;
+ 
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 8f9d849a0012..5920820bfbd0 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -3841,7 +3841,7 @@ again:
+ 	if (rc == -ENODATA)
+ 		rc = 0;
+ 
+-	ctx->rc = (rc == 0) ? ctx->total_len : rc;
++	ctx->rc = (rc == 0) ? (ssize_t)ctx->total_len : rc;
+ 
+ 	mutex_unlock(&ctx->aio_mutex);
+ 
+diff --git a/fs/cifs/inode.c b/fs/cifs/inode.c
+index b16f8d23e97b..9458b1582342 100644
+--- a/fs/cifs/inode.c
++++ b/fs/cifs/inode.c
+@@ -2516,25 +2516,26 @@ cifs_setattr_nounix(struct dentry *direntry, struct iattr *attrs)
+ 
+ 	/*
+ 	 * Attempt to flush data before changing attributes. We need to do
+-	 * this for ATTR_SIZE and ATTR_MTIME for sure, and if we change the
+-	 * ownership or mode then we may also need to do this. Here, we take
+-	 * the safe way out and just do the flush on all setattr requests. If
+-	 * the flush returns error, store it to report later and continue.
++	 * this for ATTR_SIZE and ATTR_MTIME.  If the flush of the data
++	 * returns error, store it to report later and continue.
+ 	 *
+ 	 * BB: This should be smarter. Why bother flushing pages that
+ 	 * will be truncated anyway? Also, should we error out here if
+-	 * the flush returns error?
++	 * the flush returns error? Do we need to check for ATTR_MTIME_SET flag?
+ 	 */
+-	rc = filemap_write_and_wait(inode->i_mapping);
+-	if (is_interrupt_error(rc)) {
+-		rc = -ERESTARTSYS;
+-		goto cifs_setattr_exit;
++	if (attrs->ia_valid & (ATTR_MTIME | ATTR_SIZE | ATTR_CTIME)) {
++		rc = filemap_write_and_wait(inode->i_mapping);
++		if (is_interrupt_error(rc)) {
++			rc = -ERESTARTSYS;
++			goto cifs_setattr_exit;
++		}
++		mapping_set_error(inode->i_mapping, rc);
+ 	}
+ 
+-	mapping_set_error(inode->i_mapping, rc);
+ 	rc = 0;
+ 
+-	if (attrs->ia_valid & ATTR_MTIME) {
++	if ((attrs->ia_valid & ATTR_MTIME) &&
++	    !(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_NOSSYNC)) {
+ 		rc = cifs_get_writable_file(cifsInode, FIND_WR_ANY, &wfile);
+ 		if (!rc) {
+ 			tcon = tlink_tcon(wfile->tlink);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index cfe9b800ea8c..788344b5949e 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -3248,6 +3248,10 @@ static long smb3_simple_falloc(struct file *file, struct cifs_tcon *tcon,
+ 	 * Extending the file
+ 	 */
+ 	if ((keep_size == false) && i_size_read(inode) < off + len) {
++		rc = inode_newsize_ok(inode, off + len);
++		if (rc)
++			goto out;
++
+ 		if ((cifsi->cifsAttrs & FILE_ATTRIBUTE_SPARSE_FILE) == 0)
+ 			smb2_set_sparse(xid, tcon, cfile, inode, false);
+ 
+diff --git a/fs/debugfs/file.c b/fs/debugfs/file.c
+index db987b5110a9..f34757e8f25f 100644
+--- a/fs/debugfs/file.c
++++ b/fs/debugfs/file.c
+@@ -175,8 +175,13 @@ static int open_proxy_open(struct inode *inode, struct file *filp)
+ 	if (r)
+ 		goto out;
+ 
+-	real_fops = fops_get(real_fops);
+-	if (!real_fops) {
++	if (!fops_get(real_fops)) {
++#ifdef MODULE
++		if (real_fops->owner &&
++		    real_fops->owner->state == MODULE_STATE_GOING)
++			goto out;
++#endif
++
+ 		/* Huh? Module did not clean up after itself at exit? */
+ 		WARN(1, "debugfs file owner did not clean up at exit: %pd",
+ 			dentry);
+@@ -305,8 +310,13 @@ static int full_proxy_open(struct inode *inode, struct file *filp)
+ 	if (r)
+ 		goto out;
+ 
+-	real_fops = fops_get(real_fops);
+-	if (!real_fops) {
++	if (!fops_get(real_fops)) {
++#ifdef MODULE
++		if (real_fops->owner &&
++		    real_fops->owner->state == MODULE_STATE_GOING)
++			goto out;
++#endif
++
+ 		/* Huh? Module did not cleanup after itself at exit? */
+ 		WARN(1, "debugfs file owner did not clean up at exit: %pd",
+ 			dentry);
+diff --git a/fs/erofs/utils.c b/fs/erofs/utils.c
+index fddc5059c930..df42ea552a44 100644
+--- a/fs/erofs/utils.c
++++ b/fs/erofs/utils.c
+@@ -286,7 +286,7 @@ static unsigned long erofs_shrink_scan(struct shrinker *shrink,
+ 		spin_unlock(&erofs_sb_list_lock);
+ 		sbi->shrinker_run_no = run_no;
+ 
+-		freed += erofs_shrink_workstation(sbi, nr);
++		freed += erofs_shrink_workstation(sbi, nr - freed);
+ 
+ 		spin_lock(&erofs_sb_list_lock);
+ 		/* Get the next list element before we move this one */
+diff --git a/fs/exec.c b/fs/exec.c
+index db17be51b112..a58625f27652 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1386,7 +1386,7 @@ void setup_new_exec(struct linux_binprm * bprm)
+ 
+ 	/* An exec changes our domain. We are no longer part of the thread
+ 	   group */
+-	current->self_exec_id++;
++	WRITE_ONCE(current->self_exec_id, current->self_exec_id + 1);
+ 	flush_signal_handlers(current, 0);
+ }
+ EXPORT_SYMBOL(setup_new_exec);
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index fa0ff78dc033..c5d05564cd29 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4812,7 +4812,7 @@ static int ext4_inode_blocks_set(handle_t *handle,
+ 				struct ext4_inode_info *ei)
+ {
+ 	struct inode *inode = &(ei->vfs_inode);
+-	u64 i_blocks = inode->i_blocks;
++	u64 i_blocks = READ_ONCE(inode->i_blocks);
+ 	struct super_block *sb = inode->i_sb;
+ 
+ 	if (i_blocks <= ~0U) {
+diff --git a/fs/filesystems.c b/fs/filesystems.c
+index 77bf5f95362d..90b8d879fbaf 100644
+--- a/fs/filesystems.c
++++ b/fs/filesystems.c
+@@ -272,7 +272,9 @@ struct file_system_type *get_fs_type(const char *name)
+ 	fs = __get_fs_type(name, len);
+ 	if (!fs && (request_module("fs-%.*s", len, name) == 0)) {
+ 		fs = __get_fs_type(name, len);
+-		WARN_ONCE(!fs, "request_module fs-%.*s succeeded, but still no fs?\n", len, name);
++		if (!fs)
++			pr_warn_once("request_module fs-%.*s succeeded, but still no fs?\n",
++				     len, name);
+ 	}
+ 
+ 	if (dot && fs && !(fs->fs_flags & FS_HAS_SUBTYPE)) {
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index d0eceaff3cea..19ebc6cd0f2b 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -645,6 +645,9 @@ __acquires(&gl->gl_lockref.lock)
+ 			goto out_unlock;
+ 		if (nonblock)
+ 			goto out_sched;
++		smp_mb();
++		if (atomic_read(&gl->gl_revokes) != 0)
++			goto out_sched;
+ 		set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
+ 		GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE);
+ 		gl->gl_target = gl->gl_demote_state;
+diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c
+index 061d22e1ceb6..efc899a3876b 100644
+--- a/fs/gfs2/glops.c
++++ b/fs/gfs2/glops.c
+@@ -89,8 +89,32 @@ static void gfs2_ail_empty_gl(struct gfs2_glock *gl)
+ 	INIT_LIST_HEAD(&tr.tr_databuf);
+ 	tr.tr_revokes = atomic_read(&gl->gl_ail_count);
+ 
+-	if (!tr.tr_revokes)
++	if (!tr.tr_revokes) {
++		bool have_revokes;
++		bool log_in_flight;
++
++		/*
++		 * We have nothing on the ail, but there could be revokes on
++		 * the sdp revoke queue, in which case, we still want to flush
++		 * the log and wait for it to finish.
++		 *
++		 * If the sdp revoke list is empty too, we might still have an
++		 * io outstanding for writing revokes, so we should wait for
++		 * it before returning.
++		 *
++		 * If none of these conditions are true, our revokes are all
++		 * flushed and we can return.
++		 */
++		gfs2_log_lock(sdp);
++		have_revokes = !list_empty(&sdp->sd_log_revokes);
++		log_in_flight = atomic_read(&sdp->sd_log_in_flight);
++		gfs2_log_unlock(sdp);
++		if (have_revokes)
++			goto flush;
++		if (log_in_flight)
++			log_flush_wait(sdp);
+ 		return;
++	}
+ 
+ 	/* A shortened, inline version of gfs2_trans_begin()
+          * tr->alloced is not set since the transaction structure is
+@@ -105,6 +129,7 @@ static void gfs2_ail_empty_gl(struct gfs2_glock *gl)
+ 	__gfs2_ail_flush(gl, 0, tr.tr_revokes);
+ 
+ 	gfs2_trans_end(sdp);
++flush:
+ 	gfs2_log_flush(sdp, NULL, GFS2_LOG_HEAD_FLUSH_NORMAL |
+ 		       GFS2_LFC_AIL_EMPTY_GL);
+ }
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 00a2e721a374..08dd6a430234 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -512,7 +512,7 @@ static void log_pull_tail(struct gfs2_sbd *sdp, unsigned int new_tail)
+ }
+ 
+ 
+-static void log_flush_wait(struct gfs2_sbd *sdp)
++void log_flush_wait(struct gfs2_sbd *sdp)
+ {
+ 	DEFINE_WAIT(wait);
+ 
+diff --git a/fs/gfs2/log.h b/fs/gfs2/log.h
+index c0a65e5a126b..c1cd6ae17659 100644
+--- a/fs/gfs2/log.h
++++ b/fs/gfs2/log.h
+@@ -73,6 +73,7 @@ extern void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl,
+ 			   u32 type);
+ extern void gfs2_log_commit(struct gfs2_sbd *sdp, struct gfs2_trans *trans);
+ extern void gfs2_ail1_flush(struct gfs2_sbd *sdp, struct writeback_control *wbc);
++extern void log_flush_wait(struct gfs2_sbd *sdp);
+ 
+ extern int gfs2_logd(void *data);
+ extern void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd);
+diff --git a/fs/hfsplus/attributes.c b/fs/hfsplus/attributes.c
+index e6d554476db4..eeebe80c6be4 100644
+--- a/fs/hfsplus/attributes.c
++++ b/fs/hfsplus/attributes.c
+@@ -292,6 +292,10 @@ static int __hfsplus_delete_attr(struct inode *inode, u32 cnid,
+ 		return -ENOENT;
+ 	}
+ 
++	/* Avoid btree corruption */
++	hfs_bnode_read(fd->bnode, fd->search_key,
++			fd->keyoffset, fd->keylength);
++
+ 	err = hfs_brec_remove(fd);
+ 	if (err)
+ 		return err;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index bdcffd78fbb9..a46de2cfc28e 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -565,6 +565,7 @@ struct io_kiocb {
+ 	struct list_head	link_list;
+ 	unsigned int		flags;
+ 	refcount_t		refs;
++	unsigned long		fsize;
+ 	u64			user_data;
+ 	u32			result;
+ 	u32			sequence;
+@@ -1242,7 +1243,6 @@ fallback:
+ 	req = io_get_fallback_req(ctx);
+ 	if (req)
+ 		goto got_it;
+-	percpu_ref_put(&ctx->refs);
+ 	return NULL;
+ }
+ 
+@@ -2295,6 +2295,8 @@ static int io_write_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe,
+ 	if (unlikely(!(req->file->f_mode & FMODE_WRITE)))
+ 		return -EBADF;
+ 
++	req->fsize = rlimit(RLIMIT_FSIZE);
++
+ 	/* either don't need iovec imported or already have it */
+ 	if (!req->io || req->flags & REQ_F_NEED_CLEANUP)
+ 		return 0;
+@@ -2367,10 +2369,17 @@ static int io_write(struct io_kiocb *req, struct io_kiocb **nxt,
+ 		}
+ 		kiocb->ki_flags |= IOCB_WRITE;
+ 
++		if (!force_nonblock)
++			current->signal->rlim[RLIMIT_FSIZE].rlim_cur = req->fsize;
++
+ 		if (req->file->f_op->write_iter)
+ 			ret2 = call_write_iter(req->file, kiocb, &iter);
+ 		else
+ 			ret2 = loop_rw_iter(WRITE, req->file, kiocb, &iter);
++
++		if (!force_nonblock)
++			current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
++
+ 		/*
+ 		 * Raw bdev writes will -EOPNOTSUPP for IOCB_NOWAIT. Just
+ 		 * retry them without IOCB_NOWAIT.
+@@ -2513,8 +2522,10 @@ static void io_fallocate_finish(struct io_wq_work **workptr)
+ 	if (io_req_cancelled(req))
+ 		return;
+ 
++	current->signal->rlim[RLIMIT_FSIZE].rlim_cur = req->fsize;
+ 	ret = vfs_fallocate(req->file, req->sync.mode, req->sync.off,
+ 				req->sync.len);
++	current->signal->rlim[RLIMIT_FSIZE].rlim_cur = RLIM_INFINITY;
+ 	if (ret < 0)
+ 		req_set_fail_links(req);
+ 	io_cqring_add_event(req, ret);
+@@ -2532,6 +2543,7 @@ static int io_fallocate_prep(struct io_kiocb *req,
+ 	req->sync.off = READ_ONCE(sqe->off);
+ 	req->sync.len = READ_ONCE(sqe->addr);
+ 	req->sync.mode = READ_ONCE(sqe->len);
++	req->fsize = rlimit(RLIMIT_FSIZE);
+ 	return 0;
+ }
+ 
+@@ -2571,6 +2583,8 @@ static int io_openat_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	req->open.how.mode = READ_ONCE(sqe->len);
+ 	fname = u64_to_user_ptr(READ_ONCE(sqe->addr));
+ 	req->open.how.flags = READ_ONCE(sqe->open_flags);
++	if (force_o_largefile())
++		req->open.how.flags |= O_LARGEFILE;
+ 
+ 	req->open.filename = getname(fname);
+ 	if (IS_ERR(req->open.filename)) {
+@@ -5424,13 +5438,6 @@ static int __io_sqe_files_scm(struct io_ring_ctx *ctx, int nr, int offset)
+ 	struct sk_buff *skb;
+ 	int i, nr_files;
+ 
+-	if (!capable(CAP_SYS_RESOURCE) && !capable(CAP_SYS_ADMIN)) {
+-		unsigned long inflight = ctx->user->unix_inflight + nr;
+-
+-		if (inflight > task_rlimit(current, RLIMIT_NOFILE))
+-			return -EMFILE;
+-	}
+-
+ 	fpl = kzalloc(sizeof(*fpl), GFP_KERNEL);
+ 	if (!fpl)
+ 		return -ENOMEM;
+diff --git a/fs/nfs/fs_context.c b/fs/nfs/fs_context.c
+index e113fcb4bb4c..1c8d8bedf34e 100644
+--- a/fs/nfs/fs_context.c
++++ b/fs/nfs/fs_context.c
+@@ -190,6 +190,7 @@ static const struct constant_table nfs_vers_tokens[] = {
+ 	{ "4.0",	Opt_vers_4_0 },
+ 	{ "4.1",	Opt_vers_4_1 },
+ 	{ "4.2",	Opt_vers_4_2 },
++	{}
+ };
+ 
+ enum {
+@@ -202,13 +203,14 @@ enum {
+ 	nr__Opt_xprt
+ };
+ 
+-static const struct constant_table nfs_xprt_protocol_tokens[nr__Opt_xprt] = {
++static const struct constant_table nfs_xprt_protocol_tokens[] = {
+ 	{ "rdma",	Opt_xprt_rdma },
+ 	{ "rdma6",	Opt_xprt_rdma6 },
+ 	{ "tcp",	Opt_xprt_tcp },
+ 	{ "tcp6",	Opt_xprt_tcp6 },
+ 	{ "udp",	Opt_xprt_udp },
+ 	{ "udp6",	Opt_xprt_udp6 },
++	{}
+ };
+ 
+ enum {
+@@ -239,6 +241,7 @@ static const struct constant_table nfs_secflavor_tokens[] = {
+ 	{ "spkm3i",	Opt_sec_spkmi },
+ 	{ "spkm3p",	Opt_sec_spkmp },
+ 	{ "sys",	Opt_sec_sys },
++	{}
+ };
+ 
+ /*
+diff --git a/fs/nfs/namespace.c b/fs/nfs/namespace.c
+index f3ece8ed3203..4c943b890995 100644
+--- a/fs/nfs/namespace.c
++++ b/fs/nfs/namespace.c
+@@ -145,6 +145,7 @@ struct vfsmount *nfs_d_automount(struct path *path)
+ 	struct vfsmount *mnt = ERR_PTR(-ENOMEM);
+ 	struct nfs_server *server = NFS_SERVER(d_inode(path->dentry));
+ 	struct nfs_client *client = server->nfs_client;
++	int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);
+ 	int ret;
+ 
+ 	if (IS_ROOT(path->dentry))
+@@ -190,12 +191,12 @@ struct vfsmount *nfs_d_automount(struct path *path)
+ 	if (IS_ERR(mnt))
+ 		goto out_fc;
+ 
+-	if (nfs_mountpoint_expiry_timeout < 0)
++	mntget(mnt); /* prevent immediate expiration */
++	if (timeout <= 0)
+ 		goto out_fc;
+ 
+-	mntget(mnt); /* prevent immediate expiration */
+ 	mnt_set_expiry(mnt, &nfs_automount_list);
+-	schedule_delayed_work(&nfs_automount_task, nfs_mountpoint_expiry_timeout);
++	schedule_delayed_work(&nfs_automount_task, timeout);
+ 
+ out_fc:
+ 	put_fs_context(fc);
+@@ -233,10 +234,11 @@ const struct inode_operations nfs_referral_inode_operations = {
+ static void nfs_expire_automounts(struct work_struct *work)
+ {
+ 	struct list_head *list = &nfs_automount_list;
++	int timeout = READ_ONCE(nfs_mountpoint_expiry_timeout);
+ 
+ 	mark_mounts_for_expiry(list);
+-	if (!list_empty(list))
+-		schedule_delayed_work(&nfs_automount_task, nfs_mountpoint_expiry_timeout);
++	if (!list_empty(list) && timeout > 0)
++		schedule_delayed_work(&nfs_automount_task, timeout);
+ }
+ 
+ void nfs_release_automount_timer(void)
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 20b3717cd7ca..8b7c525dbbf7 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -1177,38 +1177,38 @@ int nfs_pageio_add_request(struct nfs_pageio_descriptor *desc,
+ 	if (desc->pg_error < 0)
+ 		goto out_failed;
+ 
+-	for (midx = 0; midx < desc->pg_mirror_count; midx++) {
+-		if (midx) {
+-			nfs_page_group_lock(req);
+-
+-			/* find the last request */
+-			for (lastreq = req->wb_head;
+-			     lastreq->wb_this_page != req->wb_head;
+-			     lastreq = lastreq->wb_this_page)
+-				;
+-
+-			dupreq = nfs_create_subreq(req, lastreq,
+-					pgbase, offset, bytes);
+-
+-			nfs_page_group_unlock(req);
+-			if (IS_ERR(dupreq)) {
+-				desc->pg_error = PTR_ERR(dupreq);
+-				goto out_failed;
+-			}
+-		} else
+-			dupreq = req;
++	/* Create the mirror instances first, and fire them off */
++	for (midx = 1; midx < desc->pg_mirror_count; midx++) {
++		nfs_page_group_lock(req);
++
++		/* find the last request */
++		for (lastreq = req->wb_head;
++		     lastreq->wb_this_page != req->wb_head;
++		     lastreq = lastreq->wb_this_page)
++			;
++
++		dupreq = nfs_create_subreq(req, lastreq,
++				pgbase, offset, bytes);
++
++		nfs_page_group_unlock(req);
++		if (IS_ERR(dupreq)) {
++			desc->pg_error = PTR_ERR(dupreq);
++			goto out_failed;
++		}
+ 
+-		if (nfs_pgio_has_mirroring(desc))
+-			desc->pg_mirror_idx = midx;
++		desc->pg_mirror_idx = midx;
+ 		if (!nfs_pageio_add_request_mirror(desc, dupreq))
+ 			goto out_cleanup_subreq;
+ 	}
+ 
++	desc->pg_mirror_idx = 0;
++	if (!nfs_pageio_add_request_mirror(desc, req))
++		goto out_failed;
++
+ 	return 1;
+ 
+ out_cleanup_subreq:
+-	if (req != dupreq)
+-		nfs_pageio_cleanup_request(desc, dupreq);
++	nfs_pageio_cleanup_request(desc, dupreq);
+ out_failed:
+ 	nfs_pageio_error_cleanup(desc);
+ 	return 0;
+diff --git a/fs/nfs/write.c b/fs/nfs/write.c
+index c478b772cc49..38abd130528a 100644
+--- a/fs/nfs/write.c
++++ b/fs/nfs/write.c
+@@ -444,6 +444,7 @@ nfs_destroy_unlinked_subrequests(struct nfs_page *destroy_list,
+ 		}
+ 
+ 		subreq->wb_head = subreq;
++		nfs_release_request(old_head);
+ 
+ 		if (test_and_clear_bit(PG_INODE_REF, &subreq->wb_flags)) {
+ 			nfs_release_request(subreq);
+diff --git a/fs/nfsd/nfsctl.c b/fs/nfsd/nfsctl.c
+index e109a1007704..3bb2db947d29 100644
+--- a/fs/nfsd/nfsctl.c
++++ b/fs/nfsd/nfsctl.c
+@@ -1333,6 +1333,7 @@ void nfsd_client_rmdir(struct dentry *dentry)
+ 	dget(dentry);
+ 	ret = simple_rmdir(dir, dentry);
+ 	WARN_ON_ONCE(ret);
++	fsnotify_rmdir(dir, dentry);
+ 	d_delete(dentry);
+ 	inode_unlock(dir);
+ }
+diff --git a/fs/ocfs2/alloc.c b/fs/ocfs2/alloc.c
+index 88534eb0e7c2..3d5b6b989db2 100644
+--- a/fs/ocfs2/alloc.c
++++ b/fs/ocfs2/alloc.c
+@@ -7403,6 +7403,10 @@ int ocfs2_truncate_inline(struct inode *inode, struct buffer_head *di_bh,
+ 	struct ocfs2_dinode *di = (struct ocfs2_dinode *)di_bh->b_data;
+ 	struct ocfs2_inline_data *idata = &di->id2.i_data;
+ 
++	/* No need to punch hole beyond i_size. */
++	if (start >= i_size_read(inode))
++		return 0;
++
+ 	if (end > i_size_read(inode))
+ 		end = i_size_read(inode);
+ 
+diff --git a/fs/pstore/inode.c b/fs/pstore/inode.c
+index 7fbe8f058220..d99b5d39aa90 100644
+--- a/fs/pstore/inode.c
++++ b/fs/pstore/inode.c
+@@ -87,11 +87,11 @@ static void *pstore_ftrace_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ 	struct pstore_private *ps = s->private;
+ 	struct pstore_ftrace_seq_data *data = v;
+ 
++	(*pos)++;
+ 	data->off += REC_SIZE;
+ 	if (data->off + REC_SIZE > ps->total_size)
+ 		return NULL;
+ 
+-	(*pos)++;
+ 	return data;
+ }
+ 
+@@ -101,6 +101,9 @@ static int pstore_ftrace_seq_show(struct seq_file *s, void *v)
+ 	struct pstore_ftrace_seq_data *data = v;
+ 	struct pstore_ftrace_record *rec;
+ 
++	if (!data)
++		return 0;
++
+ 	rec = (struct pstore_ftrace_record *)(ps->record->buf + data->off);
+ 
+ 	seq_printf(s, "CPU:%d ts:%llu %08lx  %08lx  %ps <- %pS\n",
+diff --git a/fs/pstore/platform.c b/fs/pstore/platform.c
+index d896457e7c11..408277ee3cdb 100644
+--- a/fs/pstore/platform.c
++++ b/fs/pstore/platform.c
+@@ -823,9 +823,9 @@ static int __init pstore_init(void)
+ 
+ 	ret = pstore_init_fs();
+ 	if (ret)
+-		return ret;
++		free_buf_for_compression();
+ 
+-	return 0;
++	return ret;
+ }
+ late_initcall(pstore_init);
+ 
+diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
+index 8e8be989c2a6..a50d6dea44c3 100644
+--- a/include/acpi/acpixf.h
++++ b/include/acpi/acpixf.h
+@@ -752,7 +752,7 @@ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_dispatch_gpe(acpi_handle gpe_device, u3
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_disable_all_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_runtime_gpes(void))
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status acpi_enable_all_wakeup_gpes(void))
+-ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(void))
++ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_gpe_status_set(u32 gpe_skip_number))
+ ACPI_HW_DEPENDENT_RETURN_UINT32(u32 acpi_any_fixed_event_status_set(void))
+ 
+ ACPI_HW_DEPENDENT_RETURN_STATUS(acpi_status
+diff --git a/include/linux/cpu.h b/include/linux/cpu.h
+index 1ca2baf817ed..94cda8c3b5d1 100644
+--- a/include/linux/cpu.h
++++ b/include/linux/cpu.h
+@@ -138,12 +138,18 @@ static inline void get_online_cpus(void) { cpus_read_lock(); }
+ static inline void put_online_cpus(void) { cpus_read_unlock(); }
+ 
+ #ifdef CONFIG_PM_SLEEP_SMP
+-extern int freeze_secondary_cpus(int primary);
++int __freeze_secondary_cpus(int primary, bool suspend);
++static inline int freeze_secondary_cpus(int primary)
++{
++	return __freeze_secondary_cpus(primary, true);
++}
++
+ static inline int disable_nonboot_cpus(void)
+ {
+-	return freeze_secondary_cpus(0);
++	return __freeze_secondary_cpus(0, false);
+ }
+-extern void enable_nonboot_cpus(void);
++
++void enable_nonboot_cpus(void);
+ 
+ static inline int suspend_disable_secondary_cpus(void)
+ {
+diff --git a/include/linux/devfreq_cooling.h b/include/linux/devfreq_cooling.h
+index 4635f95000a4..79a6e37a1d6f 100644
+--- a/include/linux/devfreq_cooling.h
++++ b/include/linux/devfreq_cooling.h
+@@ -75,7 +75,7 @@ void devfreq_cooling_unregister(struct thermal_cooling_device *dfc);
+ 
+ #else /* !CONFIG_DEVFREQ_THERMAL */
+ 
+-struct thermal_cooling_device *
++static inline struct thermal_cooling_device *
+ of_devfreq_cooling_register_power(struct device_node *np, struct devfreq *df,
+ 				  struct devfreq_cooling_power *dfc_power)
+ {
+diff --git a/include/linux/iocontext.h b/include/linux/iocontext.h
+index dba15ca8e60b..1dcd9198beb7 100644
+--- a/include/linux/iocontext.h
++++ b/include/linux/iocontext.h
+@@ -8,6 +8,7 @@
+ 
+ enum {
+ 	ICQ_EXITED		= 1 << 2,
++	ICQ_DESTROYED		= 1 << 3,
+ };
+ 
+ /*
+diff --git a/include/linux/nvme-fc-driver.h b/include/linux/nvme-fc-driver.h
+index 6d0d70f3219c..10f81629b9ce 100644
+--- a/include/linux/nvme-fc-driver.h
++++ b/include/linux/nvme-fc-driver.h
+@@ -270,8 +270,6 @@ struct nvme_fc_remote_port {
+  *
+  * Host/Initiator Transport Entrypoints/Parameters:
+  *
+- * @module:  The LLDD module using the interface
+- *
+  * @localport_delete:  The LLDD initiates deletion of a localport via
+  *       nvme_fc_deregister_localport(). However, the teardown is
+  *       asynchronous. This routine is called upon the completion of the
+@@ -385,8 +383,6 @@ struct nvme_fc_remote_port {
+  *       Value is Mandatory. Allowed to be zero.
+  */
+ struct nvme_fc_port_template {
+-	struct module	*module;
+-
+ 	/* initiator-based functions */
+ 	void	(*localport_delete)(struct nvme_fc_local_port *);
+ 	void	(*remoteport_delete)(struct nvme_fc_remote_port *);
+diff --git a/include/linux/pci-epc.h b/include/linux/pci-epc.h
+index 56f1846b9d39..c8e39607dbb7 100644
+--- a/include/linux/pci-epc.h
++++ b/include/linux/pci-epc.h
+@@ -71,6 +71,7 @@ struct pci_epc_ops {
+  * @bitmap: bitmap to manage the PCI address space
+  * @pages: number of bits representing the address region
+  * @page_size: size of each page
++ * @lock: mutex to protect bitmap
+  */
+ struct pci_epc_mem {
+ 	phys_addr_t	phys_base;
+@@ -78,6 +79,8 @@ struct pci_epc_mem {
+ 	unsigned long	*bitmap;
+ 	size_t		page_size;
+ 	int		pages;
++	/* mutex to protect against concurrent access for memory allocation*/
++	struct mutex	lock;
+ };
+ 
+ /**
+diff --git a/include/linux/sched.h b/include/linux/sched.h
+index 04278493bf15..0323e4f0982a 100644
+--- a/include/linux/sched.h
++++ b/include/linux/sched.h
+@@ -939,8 +939,8 @@ struct task_struct {
+ 	struct seccomp			seccomp;
+ 
+ 	/* Thread group tracking: */
+-	u32				parent_exec_id;
+-	u32				self_exec_id;
++	u64				parent_exec_id;
++	u64				self_exec_id;
+ 
+ 	/* Protection against (de-)allocation: mm, files, fs, tty, keyrings, mems_allowed, mempolicy: */
+ 	spinlock_t			alloc_lock;
+diff --git a/include/linux/xarray.h b/include/linux/xarray.h
+index f73e1775ded0..51bc10d5f6a8 100644
+--- a/include/linux/xarray.h
++++ b/include/linux/xarray.h
+@@ -1648,6 +1648,7 @@ static inline void *xas_next_marked(struct xa_state *xas, unsigned long max,
+ 								xa_mark_t mark)
+ {
+ 	struct xa_node *node = xas->xa_node;
++	void *entry;
+ 	unsigned int offset;
+ 
+ 	if (unlikely(xas_not_node(node) || node->shift))
+@@ -1659,7 +1660,10 @@ static inline void *xas_next_marked(struct xa_state *xas, unsigned long max,
+ 		return NULL;
+ 	if (offset == XA_CHUNK_SIZE)
+ 		return xas_find_marked(xas, max, mark);
+-	return xa_entry(xas->xa, node, offset);
++	entry = xa_entry(xas->xa, node, offset);
++	if (!entry)
++		return xas_find_marked(xas, max, mark);
++	return entry;
+ }
+ 
+ /*
+diff --git a/include/media/rc-map.h b/include/media/rc-map.h
+index f99575a0d29c..d22810dcd85c 100644
+--- a/include/media/rc-map.h
++++ b/include/media/rc-map.h
+@@ -274,6 +274,7 @@ struct rc_map *rc_map_get(const char *name);
+ #define RC_MAP_VIDEOMATE_K100            "rc-videomate-k100"
+ #define RC_MAP_VIDEOMATE_S350            "rc-videomate-s350"
+ #define RC_MAP_VIDEOMATE_TV_PVR          "rc-videomate-tv-pvr"
++#define RC_MAP_KII_PRO                   "rc-videostrong-kii-pro"
+ #define RC_MAP_WETEK_HUB                 "rc-wetek-hub"
+ #define RC_MAP_WETEK_PLAY2               "rc-wetek-play2"
+ #define RC_MAP_WINFAST                   "rc-winfast"
+diff --git a/include/trace/events/rcu.h b/include/trace/events/rcu.h
+index 5e49b06e8104..d56d54c17497 100644
+--- a/include/trace/events/rcu.h
++++ b/include/trace/events/rcu.h
+@@ -712,6 +712,7 @@ TRACE_EVENT_RCU(rcu_torture_read,
+  *	"Begin": rcu_barrier() started.
+  *	"EarlyExit": rcu_barrier() piggybacked, thus early exit.
+  *	"Inc1": rcu_barrier() piggyback check counter incremented.
++ *	"OfflineNoCBQ": rcu_barrier() found offline no-CBs CPU with callbacks.
+  *	"OnlineQ": rcu_barrier() found online CPU with callbacks.
+  *	"OnlineNQ": rcu_barrier() found online CPU, no callbacks.
+  *	"IRQ": An rcu_barrier_callback() callback posted on remote CPU.
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 5080469094af..595b39eee642 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -5590,6 +5590,70 @@ static bool cmp_val_with_extended_s64(s64 sval, struct bpf_reg_state *reg)
+ 		reg->smax_value <= 0 && reg->smin_value >= S32_MIN);
+ }
+ 
++/* Constrain the possible values of @reg with unsigned upper bound @bound.
++ * If @is_exclusive, @bound is an exclusive limit, otherwise it is inclusive.
++ * If @is_jmp32, @bound is a 32-bit value that only constrains the low 32 bits
++ * of @reg.
++ */
++static void set_upper_bound(struct bpf_reg_state *reg, u64 bound, bool is_jmp32,
++			    bool is_exclusive)
++{
++	if (is_exclusive) {
++		/* There are no values for `reg` that make `reg<0` true. */
++		if (bound == 0)
++			return;
++		bound--;
++	}
++	if (is_jmp32) {
++		/* Constrain the register's value in the tnum representation.
++		 * For 64-bit comparisons this happens later in
++		 * __reg_bound_offset(), but for 32-bit comparisons, we can be
++		 * more precise than what can be derived from the updated
++		 * numeric bounds.
++		 */
++		struct tnum t = tnum_range(0, bound);
++
++		t.mask |= ~0xffffffffULL; /* upper half is unknown */
++		reg->var_off = tnum_intersect(reg->var_off, t);
++
++		/* Compute the 64-bit bound from the 32-bit bound. */
++		bound += gen_hi_max(reg->var_off);
++	}
++	reg->umax_value = min(reg->umax_value, bound);
++}
++
++/* Constrain the possible values of @reg with unsigned lower bound @bound.
++ * If @is_exclusive, @bound is an exclusive limit, otherwise it is inclusive.
++ * If @is_jmp32, @bound is a 32-bit value that only constrains the low 32 bits
++ * of @reg.
++ */
++static void set_lower_bound(struct bpf_reg_state *reg, u64 bound, bool is_jmp32,
++			    bool is_exclusive)
++{
++	if (is_exclusive) {
++		/* There are no values for `reg` that make `reg>MAX` true. */
++		if (bound == (is_jmp32 ? U32_MAX : U64_MAX))
++			return;
++		bound++;
++	}
++	if (is_jmp32) {
++		/* Constrain the register's value in the tnum representation.
++		 * For 64-bit comparisons this happens later in
++		 * __reg_bound_offset(), but for 32-bit comparisons, we can be
++		 * more precise than what can be derived from the updated
++		 * numeric bounds.
++		 */
++		struct tnum t = tnum_range(bound, U32_MAX);
++
++		t.mask |= ~0xffffffffULL; /* upper half is unknown */
++		reg->var_off = tnum_intersect(reg->var_off, t);
++
++		/* Compute the 64-bit bound from the 32-bit bound. */
++		bound += gen_hi_min(reg->var_off);
++	}
++	reg->umin_value = max(reg->umin_value, bound);
++}
++
+ /* Adjusts the register min/max values in the case that the dst_reg is the
+  * variable register that we are working on, and src_reg is a constant or we're
+  * simply doing a BPF_K check.
+@@ -5645,15 +5709,8 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
+ 	case BPF_JGE:
+ 	case BPF_JGT:
+ 	{
+-		u64 false_umax = opcode == BPF_JGT ? val    : val - 1;
+-		u64 true_umin = opcode == BPF_JGT ? val + 1 : val;
+-
+-		if (is_jmp32) {
+-			false_umax += gen_hi_max(false_reg->var_off);
+-			true_umin += gen_hi_min(true_reg->var_off);
+-		}
+-		false_reg->umax_value = min(false_reg->umax_value, false_umax);
+-		true_reg->umin_value = max(true_reg->umin_value, true_umin);
++		set_upper_bound(false_reg, val, is_jmp32, opcode == BPF_JGE);
++		set_lower_bound(true_reg, val, is_jmp32, opcode == BPF_JGT);
+ 		break;
+ 	}
+ 	case BPF_JSGE:
+@@ -5674,15 +5731,8 @@ static void reg_set_min_max(struct bpf_reg_state *true_reg,
+ 	case BPF_JLE:
+ 	case BPF_JLT:
+ 	{
+-		u64 false_umin = opcode == BPF_JLT ? val    : val + 1;
+-		u64 true_umax = opcode == BPF_JLT ? val - 1 : val;
+-
+-		if (is_jmp32) {
+-			false_umin += gen_hi_min(false_reg->var_off);
+-			true_umax += gen_hi_max(true_reg->var_off);
+-		}
+-		false_reg->umin_value = max(false_reg->umin_value, false_umin);
+-		true_reg->umax_value = min(true_reg->umax_value, true_umax);
++		set_lower_bound(false_reg, val, is_jmp32, opcode == BPF_JLE);
++		set_upper_bound(true_reg, val, is_jmp32, opcode == BPF_JLT);
+ 		break;
+ 	}
+ 	case BPF_JSLE:
+@@ -5757,15 +5807,8 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
+ 	case BPF_JGE:
+ 	case BPF_JGT:
+ 	{
+-		u64 false_umin = opcode == BPF_JGT ? val    : val + 1;
+-		u64 true_umax = opcode == BPF_JGT ? val - 1 : val;
+-
+-		if (is_jmp32) {
+-			false_umin += gen_hi_min(false_reg->var_off);
+-			true_umax += gen_hi_max(true_reg->var_off);
+-		}
+-		false_reg->umin_value = max(false_reg->umin_value, false_umin);
+-		true_reg->umax_value = min(true_reg->umax_value, true_umax);
++		set_lower_bound(false_reg, val, is_jmp32, opcode == BPF_JGE);
++		set_upper_bound(true_reg, val, is_jmp32, opcode == BPF_JGT);
+ 		break;
+ 	}
+ 	case BPF_JSGE:
+@@ -5783,15 +5826,8 @@ static void reg_set_min_max_inv(struct bpf_reg_state *true_reg,
+ 	case BPF_JLE:
+ 	case BPF_JLT:
+ 	{
+-		u64 false_umax = opcode == BPF_JLT ? val    : val - 1;
+-		u64 true_umin = opcode == BPF_JLT ? val + 1 : val;
+-
+-		if (is_jmp32) {
+-			false_umax += gen_hi_max(false_reg->var_off);
+-			true_umin += gen_hi_min(true_reg->var_off);
+-		}
+-		false_reg->umax_value = min(false_reg->umax_value, false_umax);
+-		true_reg->umin_value = max(true_reg->umin_value, true_umin);
++		set_upper_bound(false_reg, val, is_jmp32, opcode == BPF_JLE);
++		set_lower_bound(true_reg, val, is_jmp32, opcode == BPF_JLT);
+ 		break;
+ 	}
+ 	case BPF_JSLE:
+diff --git a/kernel/cpu.c b/kernel/cpu.c
+index 9c706af713fb..c8e661ee26d3 100644
+--- a/kernel/cpu.c
++++ b/kernel/cpu.c
+@@ -1212,7 +1212,7 @@ EXPORT_SYMBOL_GPL(cpu_up);
+ #ifdef CONFIG_PM_SLEEP_SMP
+ static cpumask_var_t frozen_cpus;
+ 
+-int freeze_secondary_cpus(int primary)
++int __freeze_secondary_cpus(int primary, bool suspend)
+ {
+ 	int cpu, error = 0;
+ 
+@@ -1237,7 +1237,7 @@ int freeze_secondary_cpus(int primary)
+ 		if (cpu == primary)
+ 			continue;
+ 
+-		if (pm_wakeup_pending()) {
++		if (suspend && pm_wakeup_pending()) {
+ 			pr_info("Wakeup pending. Abort CPU freeze\n");
+ 			error = -EBUSY;
+ 			break;
+diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
+index 12ff766ec1fa..98e3d873792e 100644
+--- a/kernel/dma/mapping.c
++++ b/kernel/dma/mapping.c
+@@ -154,6 +154,8 @@ EXPORT_SYMBOL(dma_get_sgtable_attrs);
+  */
+ pgprot_t dma_pgprot(struct device *dev, pgprot_t prot, unsigned long attrs)
+ {
++	if (force_dma_unencrypted(dev))
++		prot = pgprot_decrypted(prot);
+ 	if (dev_is_dma_coherent(dev) ||
+ 	    (IS_ENABLED(CONFIG_DMA_NONCOHERENT_CACHE_SYNC) &&
+              (attrs & DMA_ATTR_NON_CONSISTENT)))
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index e453589da97c..243717177f44 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -935,16 +935,10 @@ perf_cgroup_set_shadow_time(struct perf_event *event, u64 now)
+ 	event->shadow_ctx_time = now - t->timestamp;
+ }
+ 
+-/*
+- * Update cpuctx->cgrp so that it is set when first cgroup event is added and
+- * cleared when last cgroup event is removed.
+- */
+ static inline void
+-list_update_cgroup_event(struct perf_event *event,
+-			 struct perf_event_context *ctx, bool add)
++perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ctx)
+ {
+ 	struct perf_cpu_context *cpuctx;
+-	struct list_head *cpuctx_entry;
+ 
+ 	if (!is_cgroup_event(event))
+ 		return;
+@@ -961,28 +955,41 @@ list_update_cgroup_event(struct perf_event *event,
+ 	 * because if the first would mismatch, the second would not try again
+ 	 * and we would leave cpuctx->cgrp unset.
+ 	 */
+-	if (add && !cpuctx->cgrp) {
++	if (ctx->is_active && !cpuctx->cgrp) {
+ 		struct perf_cgroup *cgrp = perf_cgroup_from_task(current, ctx);
+ 
+ 		if (cgroup_is_descendant(cgrp->css.cgroup, event->cgrp->css.cgroup))
+ 			cpuctx->cgrp = cgrp;
+ 	}
+ 
+-	if (add && ctx->nr_cgroups++)
++	if (ctx->nr_cgroups++)
+ 		return;
+-	else if (!add && --ctx->nr_cgroups)
++
++	list_add(&cpuctx->cgrp_cpuctx_entry,
++			per_cpu_ptr(&cgrp_cpuctx_list, event->cpu));
++}
++
++static inline void
++perf_cgroup_event_disable(struct perf_event *event, struct perf_event_context *ctx)
++{
++	struct perf_cpu_context *cpuctx;
++
++	if (!is_cgroup_event(event))
+ 		return;
+ 
+-	/* no cgroup running */
+-	if (!add)
++	/*
++	 * Because cgroup events are always per-cpu events,
++	 * @ctx == &cpuctx->ctx.
++	 */
++	cpuctx = container_of(ctx, struct perf_cpu_context, ctx);
++
++	if (--ctx->nr_cgroups)
++		return;
++
++	if (ctx->is_active && cpuctx->cgrp)
+ 		cpuctx->cgrp = NULL;
+ 
+-	cpuctx_entry = &cpuctx->cgrp_cpuctx_entry;
+-	if (add)
+-		list_add(cpuctx_entry,
+-			 per_cpu_ptr(&cgrp_cpuctx_list, event->cpu));
+-	else
+-		list_del(cpuctx_entry);
++	list_del(&cpuctx->cgrp_cpuctx_entry);
+ }
+ 
+ #else /* !CONFIG_CGROUP_PERF */
+@@ -1048,11 +1055,14 @@ static inline u64 perf_cgroup_event_time(struct perf_event *event)
+ }
+ 
+ static inline void
+-list_update_cgroup_event(struct perf_event *event,
+-			 struct perf_event_context *ctx, bool add)
++perf_cgroup_event_enable(struct perf_event *event, struct perf_event_context *ctx)
+ {
+ }
+ 
++static inline void
++perf_cgroup_event_disable(struct perf_event *event, struct perf_event_context *ctx)
++{
++}
+ #endif
+ 
+ /*
+@@ -1682,13 +1692,14 @@ list_add_event(struct perf_event *event, struct perf_event_context *ctx)
+ 		add_event_to_groups(event, ctx);
+ 	}
+ 
+-	list_update_cgroup_event(event, ctx, true);
+-
+ 	list_add_rcu(&event->event_entry, &ctx->event_list);
+ 	ctx->nr_events++;
+ 	if (event->attr.inherit_stat)
+ 		ctx->nr_stat++;
+ 
++	if (event->state > PERF_EVENT_STATE_OFF)
++		perf_cgroup_event_enable(event, ctx);
++
+ 	ctx->generation++;
+ }
+ 
+@@ -1864,8 +1875,6 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx)
+ 
+ 	event->attach_state &= ~PERF_ATTACH_CONTEXT;
+ 
+-	list_update_cgroup_event(event, ctx, false);
+-
+ 	ctx->nr_events--;
+ 	if (event->attr.inherit_stat)
+ 		ctx->nr_stat--;
+@@ -1882,8 +1891,10 @@ list_del_event(struct perf_event *event, struct perf_event_context *ctx)
+ 	 * of error state is by explicit re-enabling
+ 	 * of the event
+ 	 */
+-	if (event->state > PERF_EVENT_STATE_OFF)
++	if (event->state > PERF_EVENT_STATE_OFF) {
++		perf_cgroup_event_disable(event, ctx);
+ 		perf_event_set_state(event, PERF_EVENT_STATE_OFF);
++	}
+ 
+ 	ctx->generation++;
+ }
+@@ -1986,6 +1997,12 @@ static int perf_get_aux_event(struct perf_event *event,
+ 	return 1;
+ }
+ 
++static inline struct list_head *get_event_list(struct perf_event *event)
++{
++	struct perf_event_context *ctx = event->ctx;
++	return event->attr.pinned ? &ctx->pinned_active : &ctx->flexible_active;
++}
++
+ static void perf_group_detach(struct perf_event *event)
+ {
+ 	struct perf_event *sibling, *tmp;
+@@ -2028,12 +2045,8 @@ static void perf_group_detach(struct perf_event *event)
+ 		if (!RB_EMPTY_NODE(&event->group_node)) {
+ 			add_event_to_groups(sibling, event->ctx);
+ 
+-			if (sibling->state == PERF_EVENT_STATE_ACTIVE) {
+-				struct list_head *list = sibling->attr.pinned ?
+-					&ctx->pinned_active : &ctx->flexible_active;
+-
+-				list_add_tail(&sibling->active_list, list);
+-			}
++			if (sibling->state == PERF_EVENT_STATE_ACTIVE)
++				list_add_tail(&sibling->active_list, get_event_list(sibling));
+ 		}
+ 
+ 		WARN_ON_ONCE(sibling->ctx != event->ctx);
+@@ -2112,6 +2125,7 @@ event_sched_out(struct perf_event *event,
+ 
+ 	if (READ_ONCE(event->pending_disable) >= 0) {
+ 		WRITE_ONCE(event->pending_disable, -1);
++		perf_cgroup_event_disable(event, ctx);
+ 		state = PERF_EVENT_STATE_OFF;
+ 	}
+ 	perf_event_set_state(event, state);
+@@ -2248,6 +2262,7 @@ static void __perf_event_disable(struct perf_event *event,
+ 		event_sched_out(event, cpuctx, ctx);
+ 
+ 	perf_event_set_state(event, PERF_EVENT_STATE_OFF);
++	perf_cgroup_event_disable(event, ctx);
+ }
+ 
+ /*
+@@ -2350,6 +2365,8 @@ event_sched_in(struct perf_event *event,
+ {
+ 	int ret = 0;
+ 
++	WARN_ON_ONCE(event->ctx != ctx);
++
+ 	lockdep_assert_held(&ctx->lock);
+ 
+ 	if (event->state <= PERF_EVENT_STATE_OFF)
+@@ -2629,7 +2646,7 @@ static int  __perf_install_in_context(void *info)
+ 	}
+ 
+ #ifdef CONFIG_CGROUP_PERF
+-	if (is_cgroup_event(event)) {
++	if (event->state > PERF_EVENT_STATE_OFF && is_cgroup_event(event)) {
+ 		/*
+ 		 * If the current cgroup doesn't match the event's
+ 		 * cgroup, we should not try to schedule it.
+@@ -2789,6 +2806,7 @@ static void __perf_event_enable(struct perf_event *event,
+ 		ctx_sched_out(ctx, cpuctx, EVENT_TIME);
+ 
+ 	perf_event_set_state(event, PERF_EVENT_STATE_INACTIVE);
++	perf_cgroup_event_enable(event, ctx);
+ 
+ 	if (!ctx->is_active)
+ 		return;
+@@ -3419,15 +3437,11 @@ static int visit_groups_merge(struct perf_event_groups *groups, int cpu,
+ 	return 0;
+ }
+ 
+-struct sched_in_data {
+-	struct perf_event_context *ctx;
+-	struct perf_cpu_context *cpuctx;
+-	int can_add_hw;
+-};
+-
+-static int pinned_sched_in(struct perf_event *event, void *data)
++static int merge_sched_in(struct perf_event *event, void *data)
+ {
+-	struct sched_in_data *sid = data;
++	struct perf_event_context *ctx = event->ctx;
++	struct perf_cpu_context *cpuctx = __get_cpu_context(ctx);
++	int *can_add_hw = data;
+ 
+ 	if (event->state <= PERF_EVENT_STATE_OFF)
+ 		return 0;
+@@ -3435,39 +3449,19 @@ static int pinned_sched_in(struct perf_event *event, void *data)
+ 	if (!event_filter_match(event))
+ 		return 0;
+ 
+-	if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) {
+-		if (!group_sched_in(event, sid->cpuctx, sid->ctx))
+-			list_add_tail(&event->active_list, &sid->ctx->pinned_active);
++	if (group_can_go_on(event, cpuctx, *can_add_hw)) {
++		if (!group_sched_in(event, cpuctx, ctx))
++			list_add_tail(&event->active_list, get_event_list(event));
+ 	}
+ 
+-	/*
+-	 * If this pinned group hasn't been scheduled,
+-	 * put it in error state.
+-	 */
+-	if (event->state == PERF_EVENT_STATE_INACTIVE)
+-		perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+-
+-	return 0;
+-}
+-
+-static int flexible_sched_in(struct perf_event *event, void *data)
+-{
+-	struct sched_in_data *sid = data;
+-
+-	if (event->state <= PERF_EVENT_STATE_OFF)
+-		return 0;
+-
+-	if (!event_filter_match(event))
+-		return 0;
+-
+-	if (group_can_go_on(event, sid->cpuctx, sid->can_add_hw)) {
+-		int ret = group_sched_in(event, sid->cpuctx, sid->ctx);
+-		if (ret) {
+-			sid->can_add_hw = 0;
+-			sid->ctx->rotate_necessary = 1;
+-			return 0;
++	if (event->state == PERF_EVENT_STATE_INACTIVE) {
++		if (event->attr.pinned) {
++			perf_cgroup_event_disable(event, ctx);
++			perf_event_set_state(event, PERF_EVENT_STATE_ERROR);
+ 		}
+-		list_add_tail(&event->active_list, &sid->ctx->flexible_active);
++
++		*can_add_hw = 0;
++		ctx->rotate_necessary = 1;
+ 	}
+ 
+ 	return 0;
+@@ -3477,30 +3471,22 @@ static void
+ ctx_pinned_sched_in(struct perf_event_context *ctx,
+ 		    struct perf_cpu_context *cpuctx)
+ {
+-	struct sched_in_data sid = {
+-		.ctx = ctx,
+-		.cpuctx = cpuctx,
+-		.can_add_hw = 1,
+-	};
++	int can_add_hw = 1;
+ 
+ 	visit_groups_merge(&ctx->pinned_groups,
+ 			   smp_processor_id(),
+-			   pinned_sched_in, &sid);
++			   merge_sched_in, &can_add_hw);
+ }
+ 
+ static void
+ ctx_flexible_sched_in(struct perf_event_context *ctx,
+ 		      struct perf_cpu_context *cpuctx)
+ {
+-	struct sched_in_data sid = {
+-		.ctx = ctx,
+-		.cpuctx = cpuctx,
+-		.can_add_hw = 1,
+-	};
++	int can_add_hw = 1;
+ 
+ 	visit_groups_merge(&ctx->flexible_groups,
+ 			   smp_processor_id(),
+-			   flexible_sched_in, &sid);
++			   merge_sched_in, &can_add_hw);
+ }
+ 
+ static void
+diff --git a/kernel/irq/debugfs.c b/kernel/irq/debugfs.c
+index a949bd39e343..d44c8fd17609 100644
+--- a/kernel/irq/debugfs.c
++++ b/kernel/irq/debugfs.c
+@@ -206,8 +206,15 @@ static ssize_t irq_debug_write(struct file *file, const char __user *user_buf,
+ 		chip_bus_lock(desc);
+ 		raw_spin_lock_irqsave(&desc->lock, flags);
+ 
+-		if (irq_settings_is_level(desc) || desc->istate & IRQS_NMI) {
+-			/* Can't do level nor NMIs, sorry */
++		/*
++		 * Don't allow injection when the interrupt is:
++		 *  - Level or NMI type
++		 *  - not activated
++		 *  - replaying already
++		 */
++		if (irq_settings_is_level(desc) ||
++		    !irqd_is_activated(&desc->irq_data) ||
++		    (desc->istate & (IRQS_NMI | IRQS_REPLAY))) {
+ 			err = -EINVAL;
+ 		} else {
+ 			desc->istate |= IRQS_PENDING;
+diff --git a/kernel/irq/irqdomain.c b/kernel/irq/irqdomain.c
+index 7527e5ef6fe5..64507c663563 100644
+--- a/kernel/irq/irqdomain.c
++++ b/kernel/irq/irqdomain.c
+@@ -1310,6 +1310,11 @@ int irq_domain_alloc_irqs_hierarchy(struct irq_domain *domain,
+ 				    unsigned int irq_base,
+ 				    unsigned int nr_irqs, void *arg)
+ {
++	if (!domain->ops->alloc) {
++		pr_debug("domain->ops->alloc() is NULL\n");
++		return -ENOSYS;
++	}
++
+ 	return domain->ops->alloc(domain, irq_base, nr_irqs, arg);
+ }
+ 
+@@ -1347,11 +1352,6 @@ int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
+ 			return -EINVAL;
+ 	}
+ 
+-	if (!domain->ops->alloc) {
+-		pr_debug("domain->ops->alloc() is NULL\n");
+-		return -ENOSYS;
+-	}
+-
+ 	if (realloc && irq_base >= 0) {
+ 		virq = irq_base;
+ 	} else {
+diff --git a/kernel/kmod.c b/kernel/kmod.c
+index bc6addd9152b..a2de58de6ab6 100644
+--- a/kernel/kmod.c
++++ b/kernel/kmod.c
+@@ -120,7 +120,7 @@ out:
+  * invoke it.
+  *
+  * If module auto-loading support is disabled then this function
+- * becomes a no-operation.
++ * simply returns -ENOENT.
+  */
+ int __request_module(bool wait, const char *fmt, ...)
+ {
+@@ -137,7 +137,7 @@ int __request_module(bool wait, const char *fmt, ...)
+ 	WARN_ON_ONCE(wait && current_is_async());
+ 
+ 	if (!modprobe_path[0])
+-		return 0;
++		return -ENOENT;
+ 
+ 	va_start(args, fmt);
+ 	ret = vsnprintf(module_name, MODULE_NAME_LEN, fmt, args);
+diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
+index 32406ef0d6a2..5142a6b11bf5 100644
+--- a/kernel/locking/lockdep.c
++++ b/kernel/locking/lockdep.c
+@@ -1719,9 +1719,11 @@ unsigned long lockdep_count_forward_deps(struct lock_class *class)
+ 	this.class = class;
+ 
+ 	raw_local_irq_save(flags);
++	current->lockdep_recursion = 1;
+ 	arch_spin_lock(&lockdep_lock);
+ 	ret = __lockdep_count_forward_deps(&this);
+ 	arch_spin_unlock(&lockdep_lock);
++	current->lockdep_recursion = 0;
+ 	raw_local_irq_restore(flags);
+ 
+ 	return ret;
+@@ -1746,9 +1748,11 @@ unsigned long lockdep_count_backward_deps(struct lock_class *class)
+ 	this.class = class;
+ 
+ 	raw_local_irq_save(flags);
++	current->lockdep_recursion = 1;
+ 	arch_spin_lock(&lockdep_lock);
+ 	ret = __lockdep_count_backward_deps(&this);
+ 	arch_spin_unlock(&lockdep_lock);
++	current->lockdep_recursion = 0;
+ 	raw_local_irq_restore(flags);
+ 
+ 	return ret;
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index d91c9156fab2..c0a9865b1f6a 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -3090,9 +3090,10 @@ static void rcu_barrier_callback(struct rcu_head *rhp)
+ /*
+  * Called with preemption disabled, and from cross-cpu IRQ context.
+  */
+-static void rcu_barrier_func(void *unused)
++static void rcu_barrier_func(void *cpu_in)
+ {
+-	struct rcu_data *rdp = raw_cpu_ptr(&rcu_data);
++	uintptr_t cpu = (uintptr_t)cpu_in;
++	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
+ 
+ 	rcu_barrier_trace(TPS("IRQ"), -1, rcu_state.barrier_sequence);
+ 	rdp->barrier_head.func = rcu_barrier_callback;
+@@ -3119,7 +3120,7 @@ static void rcu_barrier_func(void *unused)
+  */
+ void rcu_barrier(void)
+ {
+-	int cpu;
++	uintptr_t cpu;
+ 	struct rcu_data *rdp;
+ 	unsigned long s = rcu_seq_snap(&rcu_state.barrier_sequence);
+ 
+@@ -3142,13 +3143,14 @@ void rcu_barrier(void)
+ 	rcu_barrier_trace(TPS("Inc1"), -1, rcu_state.barrier_sequence);
+ 
+ 	/*
+-	 * Initialize the count to one rather than to zero in order to
+-	 * avoid a too-soon return to zero in case of a short grace period
+-	 * (or preemption of this task).  Exclude CPU-hotplug operations
+-	 * to ensure that no offline CPU has callbacks queued.
++	 * Initialize the count to two rather than to zero in order
++	 * to avoid a too-soon return to zero in case of an immediate
++	 * invocation of the just-enqueued callback (or preemption of
++	 * this task).  Exclude CPU-hotplug operations to ensure that no
++	 * offline non-offloaded CPU has callbacks queued.
+ 	 */
+ 	init_completion(&rcu_state.barrier_completion);
+-	atomic_set(&rcu_state.barrier_cpu_count, 1);
++	atomic_set(&rcu_state.barrier_cpu_count, 2);
+ 	get_online_cpus();
+ 
+ 	/*
+@@ -3158,13 +3160,23 @@ void rcu_barrier(void)
+ 	 */
+ 	for_each_possible_cpu(cpu) {
+ 		rdp = per_cpu_ptr(&rcu_data, cpu);
+-		if (!cpu_online(cpu) &&
++		if (cpu_is_offline(cpu) &&
+ 		    !rcu_segcblist_is_offloaded(&rdp->cblist))
+ 			continue;
+-		if (rcu_segcblist_n_cbs(&rdp->cblist)) {
++		if (rcu_segcblist_n_cbs(&rdp->cblist) && cpu_online(cpu)) {
+ 			rcu_barrier_trace(TPS("OnlineQ"), cpu,
+ 					  rcu_state.barrier_sequence);
+-			smp_call_function_single(cpu, rcu_barrier_func, NULL, 1);
++			smp_call_function_single(cpu, rcu_barrier_func, (void *)cpu, 1);
++		} else if (rcu_segcblist_n_cbs(&rdp->cblist) &&
++			   cpu_is_offline(cpu)) {
++			rcu_barrier_trace(TPS("OfflineNoCBQ"), cpu,
++					  rcu_state.barrier_sequence);
++			local_irq_disable();
++			rcu_barrier_func((void *)cpu);
++			local_irq_enable();
++		} else if (cpu_is_offline(cpu)) {
++			rcu_barrier_trace(TPS("OfflineNoCBNoQ"), cpu,
++					  rcu_state.barrier_sequence);
+ 		} else {
+ 			rcu_barrier_trace(TPS("OnlineNQ"), cpu,
+ 					  rcu_state.barrier_sequence);
+@@ -3176,7 +3188,7 @@ void rcu_barrier(void)
+ 	 * Now that we have an rcu_barrier_callback() callback on each
+ 	 * CPU, and thus each counted, remove the initial count.
+ 	 */
+-	if (atomic_dec_and_test(&rcu_state.barrier_cpu_count))
++	if (atomic_sub_and_test(2, &rcu_state.barrier_cpu_count))
+ 		complete(&rcu_state.barrier_completion);
+ 
+ 	/* Wait for all rcu_barrier_callback() callbacks to be invoked. */
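The rcu_barrier() hunks above change the completion count's initial bias (from one to two) and its final decrement so that an immediately-invoked callback cannot complete the barrier while callbacks are still being posted. The underlying pattern — pre-biasing a countdown before posting work, then dropping the bias once posting is done — can be sketched in plain C11 atomics. This is an illustrative userspace analogue only; `post_and_wait` and `work_done` are hypothetical names, not kernel API, and it uses a single bias where the kernel's version needs two for rcu_barrier()-specific reasons:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Pre-biased countdown: the poster holds an extra count so the
 * "complete" step cannot fire while items are still being posted,
 * even if every posted item finishes immediately. */
static atomic_int pending;
static bool completed;

static void work_done(void)
{
	/* Each finished item drops one count; the last one completes. */
	if (atomic_fetch_sub(&pending, 1) == 1)
		completed = true;
}

static int post_and_wait(int nitems)
{
	completed = false;
	/* Bias by one so an immediately-finishing item cannot win early. */
	atomic_store(&pending, 1);
	for (int i = 0; i < nitems; i++) {
		atomic_fetch_add(&pending, 1);
		work_done();		/* simulate immediate completion */
	}
	/* Drop the initial bias; completion fires only now. */
	work_done();
	return completed;
}
```

Without the bias, `post_and_wait(2)` could observe the count hit zero after the first item, declaring completion with work still outstanding.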
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index 1a9983da4408..da8a19470218 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -3671,7 +3671,6 @@ static void sched_tick_remote(struct work_struct *work)
+ 	if (cpu_is_offline(cpu))
+ 		goto out_unlock;
+ 
+-	curr = rq->curr;
+ 	update_rq_clock(rq);
+ 
+ 	if (!is_idle_task(curr)) {
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index cff3e656566d..dac9104d126f 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -909,8 +909,10 @@ void task_cputime(struct task_struct *t, u64 *utime, u64 *stime)
+ 	} while (read_seqcount_retry(&vtime->seqcount, seq));
+ }
+ 
+-static int vtime_state_check(struct vtime *vtime, int cpu)
++static int vtime_state_fetch(struct vtime *vtime, int cpu)
+ {
++	int state = READ_ONCE(vtime->state);
++
+ 	/*
+ 	 * We raced against a context switch, fetch the
+ 	 * kcpustat task again.
+@@ -927,10 +929,10 @@ static int vtime_state_check(struct vtime *vtime, int cpu)
+ 	 *
+ 	 * Case 1) is ok but 2) is not. So wait for a safe VTIME state.
+ 	 */
+-	if (vtime->state == VTIME_INACTIVE)
++	if (state == VTIME_INACTIVE)
+ 		return -EAGAIN;
+ 
+-	return 0;
++	return state;
+ }
+ 
+ static u64 kcpustat_user_vtime(struct vtime *vtime)
+@@ -949,14 +951,15 @@ static int kcpustat_field_vtime(u64 *cpustat,
+ {
+ 	struct vtime *vtime = &tsk->vtime;
+ 	unsigned int seq;
+-	int err;
+ 
+ 	do {
++		int state;
++
+ 		seq = read_seqcount_begin(&vtime->seqcount);
+ 
+-		err = vtime_state_check(vtime, cpu);
+-		if (err < 0)
+-			return err;
++		state = vtime_state_fetch(vtime, cpu);
++		if (state < 0)
++			return state;
+ 
+ 		*val = cpustat[usage];
+ 
+@@ -969,7 +972,7 @@ static int kcpustat_field_vtime(u64 *cpustat,
+ 		 */
+ 		switch (usage) {
+ 		case CPUTIME_SYSTEM:
+-			if (vtime->state == VTIME_SYS)
++			if (state == VTIME_SYS)
+ 				*val += vtime->stime + vtime_delta(vtime);
+ 			break;
+ 		case CPUTIME_USER:
+@@ -981,11 +984,11 @@ static int kcpustat_field_vtime(u64 *cpustat,
+ 				*val += kcpustat_user_vtime(vtime);
+ 			break;
+ 		case CPUTIME_GUEST:
+-			if (vtime->state == VTIME_GUEST && task_nice(tsk) <= 0)
++			if (state == VTIME_GUEST && task_nice(tsk) <= 0)
+ 				*val += vtime->gtime + vtime_delta(vtime);
+ 			break;
+ 		case CPUTIME_GUEST_NICE:
+-			if (vtime->state == VTIME_GUEST && task_nice(tsk) > 0)
++			if (state == VTIME_GUEST && task_nice(tsk) > 0)
+ 				*val += vtime->gtime + vtime_delta(vtime);
+ 			break;
+ 		default:
+@@ -1036,23 +1039,23 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
+ {
+ 	struct vtime *vtime = &tsk->vtime;
+ 	unsigned int seq;
+-	int err;
+ 
+ 	do {
+ 		u64 *cpustat;
+ 		u64 delta;
++		int state;
+ 
+ 		seq = read_seqcount_begin(&vtime->seqcount);
+ 
+-		err = vtime_state_check(vtime, cpu);
+-		if (err < 0)
+-			return err;
++		state = vtime_state_fetch(vtime, cpu);
++		if (state < 0)
++			return state;
+ 
+ 		*dst = *src;
+ 		cpustat = dst->cpustat;
+ 
+ 		/* Task is sleeping, dead or idle, nothing to add */
+-		if (vtime->state < VTIME_SYS)
++		if (state < VTIME_SYS)
+ 			continue;
+ 
+ 		delta = vtime_delta(vtime);
+@@ -1061,15 +1064,15 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
+ 		 * Task runs either in user (including guest) or kernel space,
+ 		 * add pending nohz time to the right place.
+ 		 */
+-		if (vtime->state == VTIME_SYS) {
++		if (state == VTIME_SYS) {
+ 			cpustat[CPUTIME_SYSTEM] += vtime->stime + delta;
+-		} else if (vtime->state == VTIME_USER) {
++		} else if (state == VTIME_USER) {
+ 			if (task_nice(tsk) > 0)
+ 				cpustat[CPUTIME_NICE] += vtime->utime + delta;
+ 			else
+ 				cpustat[CPUTIME_USER] += vtime->utime + delta;
+ 		} else {
+-			WARN_ON_ONCE(vtime->state != VTIME_GUEST);
++			WARN_ON_ONCE(state != VTIME_GUEST);
+ 			if (task_nice(tsk) > 0) {
+ 				cpustat[CPUTIME_GUEST_NICE] += vtime->gtime + delta;
+ 				cpustat[CPUTIME_NICE] += vtime->gtime + delta;
+@@ -1080,7 +1083,7 @@ static int kcpustat_cpu_fetch_vtime(struct kernel_cpustat *dst,
+ 		}
+ 	} while (read_seqcount_retry(&vtime->seqcount, seq));
+ 
+-	return err;
++	return 0;
+ }
+ 
+ void kcpustat_cpu_fetch(struct kernel_cpustat *dst, int cpu)
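The vtime_state_check() to vtime_state_fetch() rename above fixes a subtle race: `vtime->state` was read multiple times inside one seqcount read section, so different branches could observe different values. The fix snapshots the field once with READ_ONCE() and makes every later decision branch on that snapshot. A minimal userspace analogue of the snapshot-once rule (the atomic load stands in for READ_ONCE(); `-1` stands in for `-EAGAIN`; names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

enum { STATE_INACTIVE = 0, STATE_SYS = 1, STATE_USER = 2 };

static _Atomic int state;	/* written concurrently elsewhere */

/* Fetch the state exactly once. Callers branch only on the returned
 * snapshot and never re-read `state`, so all decisions made in one
 * pass are guaranteed to agree with each other. */
static int state_fetch(void)
{
	int s = atomic_load(&state);	/* single snapshot */

	if (s == STATE_INACTIVE)
		return -1;		/* caller retries, like -EAGAIN */
	return s;
}
```

The kernel callers then use `state` (the return value) in the switch/if chains, which is why the hunks also replace every remaining `vtime->state` comparison.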
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index c1217bfe5e81..c76a20648b72 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -3957,6 +3957,7 @@ static inline void check_schedstat_required(void)
+ #endif
+ }
+ 
++static inline bool cfs_bandwidth_used(void);
+ 
+ /*
+  * MIGRATION
+@@ -4035,10 +4036,16 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+ 		__enqueue_entity(cfs_rq, se);
+ 	se->on_rq = 1;
+ 
+-	if (cfs_rq->nr_running == 1) {
++	/*
++	 * When bandwidth control is enabled, cfs_rq might have been removed
++	 * from the list because a parent was throttled while cfs_rq->nr_running
++	 * > 1. Try to add it unconditionally.
++	 */
++	if (cfs_rq->nr_running == 1 || cfs_bandwidth_used())
+ 		list_add_leaf_cfs_rq(cfs_rq);
++
++	if (cfs_rq->nr_running == 1)
+ 		check_enqueue_throttle(cfs_rq);
+-	}
+ }
+ 
+ static void __clear_buddies_last(struct sched_entity *se)
+@@ -4619,11 +4626,22 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
+ 			break;
+ 	}
+ 
+-	assert_list_leaf_cfs_rq(rq);
+-
+ 	if (!se)
+ 		add_nr_running(rq, task_delta);
+ 
++	/*
++	 * The cfs_rq_throttled() breaks in the above iteration can result in
++	 * incomplete leaf list maintenance, which would then trigger the
++	 * assertion below.
++	 */
++	for_each_sched_entity(se) {
++		cfs_rq = cfs_rq_of(se);
++
++		list_add_leaf_cfs_rq(cfs_rq);
++	}
++
++	assert_list_leaf_cfs_rq(rq);
++
+ 	/* Determine whether we need to wake up potentially idle CPU: */
+ 	if (rq->curr == rq->idle && rq->cfs.nr_running)
+ 		resched_curr(rq);
+@@ -8345,7 +8363,8 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+ 	 * Computing avg_load makes sense only when group is fully busy or
+ 	 * overloaded
+ 	 */
+-	if (sgs->group_type < group_fully_busy)
++	if (sgs->group_type == group_fully_busy ||
++		sgs->group_type == group_overloaded)
+ 		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+ 				sgs->group_capacity;
+ }
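The update_sg_wakeup_stats() hunk replaces an ordering comparison (`group_type < group_fully_busy`) — which contradicted the comment and computed avg_load for the wrong states — with explicit equality checks against the two states where the value is meaningful. Naming the intended states instead of relying on enum ordering is the general fix; a sketch (the enum here is a trimmed stand-in for the scheduler's `group_type`):

```c
#include <assert.h>

enum group_type { group_has_spare, group_fully_busy, group_overloaded };

/* avg_load only makes sense when the group is fully busy or overloaded,
 * so name those states explicitly rather than comparing enum order. */
static int want_avg_load(enum group_type t)
{
	return t == group_fully_busy || t == group_overloaded;
}
```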
+diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
+index 9ea647835fd6..b056149c228b 100644
+--- a/kernel/sched/sched.h
++++ b/kernel/sched/sched.h
+@@ -118,7 +118,13 @@ extern long calc_load_fold_active(struct rq *this_rq, long adjust);
+ #ifdef CONFIG_64BIT
+ # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
+ # define scale_load(w)		((w) << SCHED_FIXEDPOINT_SHIFT)
+-# define scale_load_down(w)	((w) >> SCHED_FIXEDPOINT_SHIFT)
++# define scale_load_down(w) \
++({ \
++	unsigned long __w = (w); \
++	if (__w) \
++		__w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT); \
++	__w; \
++})
+ #else
+ # define NICE_0_LOAD_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
+ # define scale_load(w)		(w)
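The new 64-bit scale_load_down() above clamps a non-zero weight to a minimum of 2 after the fixed-point right shift, so that a small but real weight can never round down to 0 (a zero load weight poisons the fairness arithmetic that later divides by it). A userspace rendering of the same macro logic — the shift value below is illustrative, the kernel uses SCHED_FIXEDPOINT_SHIFT:

```c
#include <assert.h>

#define FIXEDPOINT_SHIFT 10	/* illustrative stand-in */

/* Old behaviour: any weight below 1 << FIXEDPOINT_SHIFT collapses to 0. */
static unsigned long scale_down_old(unsigned long w)
{
	return w >> FIXEDPOINT_SHIFT;
}

/* New behaviour: non-zero weights are clamped to at least 2. */
static unsigned long scale_down_new(unsigned long w)
{
	unsigned long s;

	if (!w)
		return 0;
	s = w >> FIXEDPOINT_SHIFT;
	return s < 2 ? 2 : s;
}
```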
+diff --git a/kernel/seccomp.c b/kernel/seccomp.c
+index b6ea3dcb57bf..683c81e4861e 100644
+--- a/kernel/seccomp.c
++++ b/kernel/seccomp.c
+@@ -1221,6 +1221,7 @@ static const struct file_operations seccomp_notify_ops = {
+ 	.poll = seccomp_notify_poll,
+ 	.release = seccomp_notify_release,
+ 	.unlocked_ioctl = seccomp_notify_ioctl,
++	.compat_ioctl = seccomp_notify_ioctl,
+ };
+ 
+ static struct file *init_listener(struct seccomp_filter *filter)
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 5b2396350dd1..e58a6c619824 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1931,7 +1931,7 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
+ 		 * This is only possible if parent == real_parent.
+ 		 * Check if it has changed security domain.
+ 		 */
+-		if (tsk->parent_exec_id != tsk->parent->self_exec_id)
++		if (tsk->parent_exec_id != READ_ONCE(tsk->parent->self_exec_id))
+ 			sig = SIGCHLD;
+ 	}
+ 
+diff --git a/kernel/time/namespace.c b/kernel/time/namespace.c
+index 12858507d75a..6477c6d0e1a6 100644
+--- a/kernel/time/namespace.c
++++ b/kernel/time/namespace.c
+@@ -446,6 +446,7 @@ const struct proc_ns_operations timens_operations = {
+ 
+ const struct proc_ns_operations timens_for_children_operations = {
+ 	.name		= "time_for_children",
++	.real_ns_name	= "time",
+ 	.type		= CLONE_NEWTIME,
+ 	.get		= timens_for_children_get,
+ 	.put		= timens_put,
+diff --git a/kernel/time/sched_clock.c b/kernel/time/sched_clock.c
+index e4332e3e2d56..fa3f800d7d76 100644
+--- a/kernel/time/sched_clock.c
++++ b/kernel/time/sched_clock.c
+@@ -208,7 +208,8 @@ sched_clock_register(u64 (*read)(void), int bits, unsigned long rate)
+ 
+ 	if (sched_clock_timer.function != NULL) {
+ 		/* update timeout for clock wrap */
+-		hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
++		hrtimer_start(&sched_clock_timer, cd.wrap_kt,
++			      HRTIMER_MODE_REL_HARD);
+ 	}
+ 
+ 	r = rate;
+@@ -254,9 +255,9 @@ void __init generic_sched_clock_init(void)
+ 	 * Start the timer to keep sched_clock() properly updated and
+ 	 * sets the initial epoch.
+ 	 */
+-	hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
++	hrtimer_init(&sched_clock_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD);
+ 	sched_clock_timer.function = sched_clock_poll;
+-	hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
++	hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD);
+ }
+ 
+ /*
+@@ -293,7 +294,7 @@ void sched_clock_resume(void)
+ 	struct clock_read_data *rd = &cd.read_data[0];
+ 
+ 	rd->epoch_cyc = cd.actual_read_sched_clock();
+-	hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL);
++	hrtimer_start(&sched_clock_timer, cd.wrap_kt, HRTIMER_MODE_REL_HARD);
+ 	rd->read_sched_clock = cd.actual_read_sched_clock;
+ }
+ 
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index 362cca52f5de..d0568af4a0ef 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -1078,6 +1078,8 @@ static int trace_kprobe_show(struct seq_file *m, struct dyn_event *ev)
+ 	int i;
+ 
+ 	seq_putc(m, trace_kprobe_is_return(tk) ? 'r' : 'p');
++	if (trace_kprobe_is_return(tk) && tk->rp.maxactive)
++		seq_printf(m, "%d", tk->rp.maxactive);
+ 	seq_printf(m, ":%s/%s", trace_probe_group_name(&tk->tp),
+ 				trace_probe_name(&tk->tp));
+ 
+diff --git a/kernel/ucount.c b/kernel/ucount.c
+index a53cc2b4179c..29c60eb4ec9b 100644
+--- a/kernel/ucount.c
++++ b/kernel/ucount.c
+@@ -69,6 +69,7 @@ static struct ctl_table user_table[] = {
+ 	UCOUNT_ENTRY("max_net_namespaces"),
+ 	UCOUNT_ENTRY("max_mnt_namespaces"),
+ 	UCOUNT_ENTRY("max_cgroup_namespaces"),
++	UCOUNT_ENTRY("max_time_namespaces"),
+ #ifdef CONFIG_INOTIFY_USER
+ 	UCOUNT_ENTRY("max_inotify_instances"),
+ 	UCOUNT_ENTRY("max_inotify_watches"),
+diff --git a/lib/test_xarray.c b/lib/test_xarray.c
+index 8c7d7a8468b8..d4f97925dbd8 100644
+--- a/lib/test_xarray.c
++++ b/lib/test_xarray.c
+@@ -1156,6 +1156,42 @@ static noinline void check_find_entry(struct xarray *xa)
+ 	XA_BUG_ON(xa, !xa_empty(xa));
+ }
+ 
++static noinline void check_pause(struct xarray *xa)
++{
++	XA_STATE(xas, xa, 0);
++	void *entry;
++	unsigned int order;
++	unsigned long index = 1;
++	unsigned int count = 0;
++
++	for (order = 0; order < order_limit; order++) {
++		XA_BUG_ON(xa, xa_store_order(xa, index, order,
++					xa_mk_index(index), GFP_KERNEL));
++		index += 1UL << order;
++	}
++
++	rcu_read_lock();
++	xas_for_each(&xas, entry, ULONG_MAX) {
++		XA_BUG_ON(xa, entry != xa_mk_index(1UL << count));
++		count++;
++	}
++	rcu_read_unlock();
++	XA_BUG_ON(xa, count != order_limit);
++
++	count = 0;
++	xas_set(&xas, 0);
++	rcu_read_lock();
++	xas_for_each(&xas, entry, ULONG_MAX) {
++		XA_BUG_ON(xa, entry != xa_mk_index(1UL << count));
++		count++;
++		xas_pause(&xas);
++	}
++	rcu_read_unlock();
++	XA_BUG_ON(xa, count != order_limit);
++
++	xa_destroy(xa);
++}
++
+ static noinline void check_move_tiny(struct xarray *xa)
+ {
+ 	XA_STATE(xas, xa, 0);
+@@ -1664,6 +1700,7 @@ static int xarray_checks(void)
+ 	check_xa_alloc();
+ 	check_find(&array);
+ 	check_find_entry(&array);
++	check_pause(&array);
+ 	check_account(&array);
+ 	check_destroy(&array);
+ 	check_move(&array);
+diff --git a/lib/xarray.c b/lib/xarray.c
+index acd1fad2e862..08d71c7b7599 100644
+--- a/lib/xarray.c
++++ b/lib/xarray.c
+@@ -970,7 +970,7 @@ void xas_pause(struct xa_state *xas)
+ 
+ 	xas->xa_node = XAS_RESTART;
+ 	if (node) {
+-		unsigned int offset = xas->xa_offset;
++		unsigned long offset = xas->xa_offset;
+ 		while (++offset < XA_CHUNK_SIZE) {
+ 			if (!xa_is_sibling(xa_entry(xas->xa, node, offset)))
+ 				break;
+@@ -1208,6 +1208,8 @@ void *xas_find_marked(struct xa_state *xas, unsigned long max, xa_mark_t mark)
+ 		}
+ 
+ 		entry = xa_entry(xas->xa, xas->xa_node, xas->xa_offset);
++		if (!entry && !(xa_track_free(xas->xa) && mark == XA_FREE_MARK))
++			continue;
+ 		if (!xa_is_node(entry))
+ 			return entry;
+ 		xas->xa_node = xa_to_node(entry);
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 7ddf91c4295f..615d73acd0da 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -2324,6 +2324,9 @@ static unsigned long calculate_high_delay(struct mem_cgroup *memcg,
+ 		usage = page_counter_read(&memcg->memory);
+ 		high = READ_ONCE(memcg->high);
+ 
++		if (usage <= high)
++			continue;
++
+ 		/*
+ 		 * Prevent division by 0 in overage calculation by acting as if
+ 		 * it was a threshold of 1 page
+diff --git a/security/keys/key.c b/security/keys/key.c
+index 718bf7217420..e959b3c96b48 100644
+--- a/security/keys/key.c
++++ b/security/keys/key.c
+@@ -382,7 +382,7 @@ int key_payload_reserve(struct key *key, size_t datalen)
+ 		spin_lock(&key->user->lock);
+ 
+ 		if (delta > 0 &&
+-		    (key->user->qnbytes + delta >= maxbytes ||
++		    (key->user->qnbytes + delta > maxbytes ||
+ 		     key->user->qnbytes + delta < key->user->qnbytes)) {
+ 			ret = -EDQUOT;
+ 		}
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 9b898c969558..d1a3dea58dee 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -937,8 +937,8 @@ long keyctl_chown_key(key_serial_t id, uid_t user, gid_t group)
+ 				key_quota_root_maxbytes : key_quota_maxbytes;
+ 
+ 			spin_lock(&newowner->lock);
+-			if (newowner->qnkeys + 1 >= maxkeys ||
+-			    newowner->qnbytes + key->quotalen >= maxbytes ||
++			if (newowner->qnkeys + 1 > maxkeys ||
++			    newowner->qnbytes + key->quotalen > maxbytes ||
+ 			    newowner->qnbytes + key->quotalen <
+ 			    newowner->qnbytes)
+ 				goto quota_overrun;
+diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c
+index 752d078908e9..50c35ecc8953 100644
+--- a/sound/core/oss/pcm_plugin.c
++++ b/sound/core/oss/pcm_plugin.c
+@@ -196,7 +196,9 @@ int snd_pcm_plugin_free(struct snd_pcm_plugin *plugin)
+ 	return 0;
+ }
+ 
+-snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_pcm_uframes_t drv_frames)
++static snd_pcm_sframes_t plug_client_size(struct snd_pcm_substream *plug,
++					  snd_pcm_uframes_t drv_frames,
++					  bool check_size)
+ {
+ 	struct snd_pcm_plugin *plugin, *plugin_prev, *plugin_next;
+ 	int stream;
+@@ -209,7 +211,7 @@ snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_p
+ 	if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		plugin = snd_pcm_plug_last(plug);
+ 		while (plugin && drv_frames > 0) {
+-			if (drv_frames > plugin->buf_frames)
++			if (check_size && drv_frames > plugin->buf_frames)
+ 				drv_frames = plugin->buf_frames;
+ 			plugin_prev = plugin->prev;
+ 			if (plugin->src_frames)
+@@ -222,7 +224,7 @@ snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_p
+ 			plugin_next = plugin->next;
+ 			if (plugin->dst_frames)
+ 				drv_frames = plugin->dst_frames(plugin, drv_frames);
+-			if (drv_frames > plugin->buf_frames)
++			if (check_size && drv_frames > plugin->buf_frames)
+ 				drv_frames = plugin->buf_frames;
+ 			plugin = plugin_next;
+ 		}
+@@ -231,7 +233,9 @@ snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug, snd_p
+ 	return drv_frames;
+ }
+ 
+-snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pcm_uframes_t clt_frames)
++static snd_pcm_sframes_t plug_slave_size(struct snd_pcm_substream *plug,
++					 snd_pcm_uframes_t clt_frames,
++					 bool check_size)
+ {
+ 	struct snd_pcm_plugin *plugin, *plugin_prev, *plugin_next;
+ 	snd_pcm_sframes_t frames;
+@@ -252,14 +256,14 @@ snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pc
+ 				if (frames < 0)
+ 					return frames;
+ 			}
+-			if (frames > plugin->buf_frames)
++			if (check_size && frames > plugin->buf_frames)
+ 				frames = plugin->buf_frames;
+ 			plugin = plugin_next;
+ 		}
+ 	} else if (stream == SNDRV_PCM_STREAM_CAPTURE) {
+ 		plugin = snd_pcm_plug_last(plug);
+ 		while (plugin) {
+-			if (frames > plugin->buf_frames)
++			if (check_size && frames > plugin->buf_frames)
+ 				frames = plugin->buf_frames;
+ 			plugin_prev = plugin->prev;
+ 			if (plugin->src_frames) {
+@@ -274,6 +278,18 @@ snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug, snd_pc
+ 	return frames;
+ }
+ 
++snd_pcm_sframes_t snd_pcm_plug_client_size(struct snd_pcm_substream *plug,
++					   snd_pcm_uframes_t drv_frames)
++{
++	return plug_client_size(plug, drv_frames, false);
++}
++
++snd_pcm_sframes_t snd_pcm_plug_slave_size(struct snd_pcm_substream *plug,
++					  snd_pcm_uframes_t clt_frames)
++{
++	return plug_slave_size(plug, clt_frames, false);
++}
++
+ static int snd_pcm_plug_formats(const struct snd_mask *mask,
+ 				snd_pcm_format_t format)
+ {
+@@ -630,7 +646,7 @@ snd_pcm_sframes_t snd_pcm_plug_write_transfer(struct snd_pcm_substream *plug, st
+ 		src_channels = dst_channels;
+ 		plugin = next;
+ 	}
+-	return snd_pcm_plug_client_size(plug, frames);
++	return plug_client_size(plug, frames, true);
+ }
+ 
+ snd_pcm_sframes_t snd_pcm_plug_read_transfer(struct snd_pcm_substream *plug, struct snd_pcm_plugin_channel *dst_channels_final, snd_pcm_uframes_t size)
+@@ -640,7 +656,7 @@ snd_pcm_sframes_t snd_pcm_plug_read_transfer(struct snd_pcm_substream *plug, str
+ 	snd_pcm_sframes_t frames = size;
+ 	int err;
+ 
+-	frames = snd_pcm_plug_slave_size(plug, frames);
++	frames = plug_slave_size(plug, frames, true);
+ 	if (frames < 0)
+ 		return frames;
+ 
+diff --git a/sound/pci/hda/hda_beep.c b/sound/pci/hda/hda_beep.c
+index f5fd62ed4df5..841523f6b88d 100644
+--- a/sound/pci/hda/hda_beep.c
++++ b/sound/pci/hda/hda_beep.c
+@@ -290,8 +290,12 @@ int snd_hda_mixer_amp_switch_get_beep(struct snd_kcontrol *kcontrol,
+ {
+ 	struct hda_codec *codec = snd_kcontrol_chip(kcontrol);
+ 	struct hda_beep *beep = codec->beep;
++	int chs = get_amp_channels(kcontrol);
++
+ 	if (beep && (!beep->enabled || !ctl_has_mute(kcontrol))) {
+-		ucontrol->value.integer.value[0] =
++		if (chs & 1)
++			ucontrol->value.integer.value[0] = beep->enabled;
++		if (chs & 2)
+ 			ucontrol->value.integer.value[1] = beep->enabled;
+ 		return 0;
+ 	}
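The hda_beep hunk makes the beep switch honour the channel mask from get_amp_channels(): previously only `value[1]` was written regardless of which channels the control covers; now bit 0 of the mask selects slot 0 and bit 1 selects slot 1. The mask handling in isolation (`fill_switch` is a hypothetical helper, not ALSA API):

```c
#include <assert.h>

/* Fill per-channel control values according to a 2-bit channel mask:
 * bit 0 -> value[0], bit 1 -> value[1]. Unselected slots are untouched. */
static void fill_switch(long value[2], int chs, int enabled)
{
	if (chs & 1)
		value[0] = enabled;
	if (chs & 2)
		value[1] = enabled;
}
```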
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 92a042e34d3e..bd093593f8fb 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2076,6 +2076,17 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+ #endif
+ }
+ 
++/* Blacklist for skipping the whole probe:
++ * some HD-audio PCI entries are exposed without any codecs, and such devices
++ * should be ignored from the beginning.
++ */
++static const struct snd_pci_quirk driver_blacklist[] = {
++	SND_PCI_QUIRK(0x1043, 0x874f, "ASUS ROG Zenith II / Strix", 0),
++	SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0),
++	SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0),
++	{}
++};
++
+ static const struct hda_controller_ops pci_hda_ops = {
+ 	.disable_msi_reset_irq = disable_msi_reset_irq,
+ 	.pcm_mmap_prepare = pcm_mmap_prepare,
+@@ -2092,6 +2103,11 @@ static int azx_probe(struct pci_dev *pci,
+ 	bool schedule_probe;
+ 	int err;
+ 
++	if (snd_pci_quirk_lookup(pci, driver_blacklist)) {
++		dev_info(&pci->dev, "Skipping the blacklisted device\n");
++		return -ENODEV;
++	}
++
+ 	if (dev >= SNDRV_CARDS)
+ 		return -ENODEV;
+ 	if (!enable[dev]) {
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 63e1a56f705b..f57716d48557 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -107,6 +107,7 @@ struct alc_spec {
+ 	unsigned int done_hp_init:1;
+ 	unsigned int no_shutup_pins:1;
+ 	unsigned int ultra_low_power:1;
++	unsigned int has_hs_key:1;
+ 
+ 	/* for PLL fix */
+ 	hda_nid_t pll_nid;
+@@ -367,7 +368,9 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0215:
+ 	case 0x10ec0233:
+ 	case 0x10ec0235:
++	case 0x10ec0236:
+ 	case 0x10ec0255:
++	case 0x10ec0256:
+ 	case 0x10ec0257:
+ 	case 0x10ec0282:
+ 	case 0x10ec0283:
+@@ -379,11 +382,6 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0300:
+ 		alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+ 		break;
+-	case 0x10ec0236:
+-	case 0x10ec0256:
+-		alc_write_coef_idx(codec, 0x36, 0x5757);
+-		alc_update_coef_idx(codec, 0x10, 1<<9, 0);
+-		break;
+ 	case 0x10ec0275:
+ 		alc_update_coef_idx(codec, 0xe, 0, 1<<0);
+ 		break;
+@@ -2449,6 +2447,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1293, "MSI-GP65", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x7350, "MSI-7350", ALC889_FIXUP_CD),
+@@ -2982,6 +2981,107 @@ static int alc269_parse_auto_config(struct hda_codec *codec)
+ 	return alc_parse_auto_config(codec, alc269_ignore, ssids);
+ }
+ 
++static const struct hda_jack_keymap alc_headset_btn_keymap[] = {
++	{ SND_JACK_BTN_0, KEY_PLAYPAUSE },
++	{ SND_JACK_BTN_1, KEY_VOICECOMMAND },
++	{ SND_JACK_BTN_2, KEY_VOLUMEUP },
++	{ SND_JACK_BTN_3, KEY_VOLUMEDOWN },
++	{}
++};
++
++static void alc_headset_btn_callback(struct hda_codec *codec,
++				     struct hda_jack_callback *jack)
++{
++	int report = 0;
++
++	if (jack->unsol_res & (7 << 13))
++		report |= SND_JACK_BTN_0;
++
++	if (jack->unsol_res  & (1 << 16 | 3 << 8))
++		report |= SND_JACK_BTN_1;
++
++	/* Volume up key */
++	if (jack->unsol_res & (7 << 23))
++		report |= SND_JACK_BTN_2;
++
++	/* Volume down key */
++	if (jack->unsol_res & (7 << 10))
++		report |= SND_JACK_BTN_3;
++
++	jack->jack->button_state = report;
++}
++
++static void alc_disable_headset_jack_key(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (!spec->has_hs_key)
++		return;
++
++	switch (codec->core.vendor_id) {
++	case 0x10ec0215:
++	case 0x10ec0225:
++	case 0x10ec0285:
++	case 0x10ec0295:
++	case 0x10ec0289:
++	case 0x10ec0299:
++		alc_write_coef_idx(codec, 0x48, 0x0);
++		alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
++		alc_update_coef_idx(codec, 0x44, 0x0045 << 8, 0x0);
++		break;
++	case 0x10ec0236:
++	case 0x10ec0256:
++		alc_write_coef_idx(codec, 0x48, 0x0);
++		alc_update_coef_idx(codec, 0x49, 0x0045, 0x0);
++		break;
++	}
++}
++
++static void alc_enable_headset_jack_key(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (!spec->has_hs_key)
++		return;
++
++	switch (codec->core.vendor_id) {
++	case 0x10ec0215:
++	case 0x10ec0225:
++	case 0x10ec0285:
++	case 0x10ec0295:
++	case 0x10ec0289:
++	case 0x10ec0299:
++		alc_write_coef_idx(codec, 0x48, 0xd011);
++		alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
++		alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8);
++		break;
++	case 0x10ec0236:
++	case 0x10ec0256:
++		alc_write_coef_idx(codec, 0x48, 0xd011);
++		alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
++		break;
++	}
++}
++
++static void alc_fixup_headset_jack(struct hda_codec *codec,
++				    const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	switch (action) {
++	case HDA_FIXUP_ACT_PRE_PROBE:
++		spec->has_hs_key = 1;
++		snd_hda_jack_detect_enable_callback(codec, 0x55,
++						    alc_headset_btn_callback);
++		snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
++				      SND_JACK_HEADSET, alc_headset_btn_keymap);
++		break;
++	case HDA_FIXUP_ACT_INIT:
++		alc_enable_headset_jack_key(codec);
++		break;
++	}
++}
++
+ static void alc269vb_toggle_power_output(struct hda_codec *codec, int power_up)
+ {
+ 	alc_update_coef_idx(codec, 0x04, 1 << 11, power_up ? (1 << 11) : 0);
+@@ -3269,7 +3369,13 @@ static void alc256_init(struct hda_codec *codec)
+ 	alc_update_coefex_idx(codec, 0x57, 0x04, 0x0007, 0x4); /* High power */
+ 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 1 << 15); /* Clear bit */
+ 	alc_update_coefex_idx(codec, 0x53, 0x02, 0x8000, 0 << 15);
+-	alc_update_coef_idx(codec, 0x36, 1 << 13, 1 << 5); /* Switch pcbeep path to Line in path*/
++	/*
++	 * Expose headphone mic (or possibly Line In on some machines) instead
++	 * of PC Beep on 1Ah, and disable 1Ah loopback for all outputs. See
++	 * Documentation/sound/hd-audio/realtek-pc-beep.rst for details of
++	 * this register.
++	 */
++	alc_write_coef_idx(codec, 0x36, 0x5757);
+ }
+ 
+ static void alc256_shutup(struct hda_codec *codec)
+@@ -3372,6 +3478,8 @@ static void alc225_shutup(struct hda_codec *codec)
+ 
+ 	if (!hp_pin)
+ 		hp_pin = 0x21;
++
++	alc_disable_headset_jack_key(codec);
+ 	/* 3k pull low control for Headset jack. */
+ 	alc_update_coef_idx(codec, 0x4a, 0, 3 << 10);
+ 
+@@ -3411,6 +3519,9 @@ static void alc225_shutup(struct hda_codec *codec)
+ 		alc_update_coef_idx(codec, 0x4a, 3<<4, 2<<4);
+ 		msleep(30);
+ 	}
++
++	alc_update_coef_idx(codec, 0x4a, 3 << 10, 0);
++	alc_enable_headset_jack_key(codec);
+ }
+ 
+ static void alc_default_init(struct hda_codec *codec)
+@@ -4008,6 +4119,12 @@ static void alc269_fixup_hp_gpio_led(struct hda_codec *codec,
+ 	alc_fixup_hp_gpio_led(codec, action, 0x08, 0x10);
+ }
+ 
++static void alc285_fixup_hp_gpio_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	alc_fixup_hp_gpio_led(codec, action, 0x04, 0x00);
++}
++
+ static void alc286_fixup_hp_gpio_led(struct hda_codec *codec,
+ 				const struct hda_fixup *fix, int action)
+ {
+@@ -5375,17 +5492,6 @@ static void alc271_hp_gate_mic_jack(struct hda_codec *codec,
+ 	}
+ }
+ 
+-static void alc256_fixup_dell_xps_13_headphone_noise2(struct hda_codec *codec,
+-						      const struct hda_fixup *fix,
+-						      int action)
+-{
+-	if (action != HDA_FIXUP_ACT_PRE_PROBE)
+-		return;
+-
+-	snd_hda_codec_amp_stereo(codec, 0x1a, HDA_INPUT, 0, HDA_AMP_VOLMASK, 1);
+-	snd_hda_override_wcaps(codec, 0x1a, get_wcaps(codec, 0x1a) & ~AC_WCAP_IN_AMP);
+-}
+-
+ static void alc269_fixup_limit_int_mic_boost(struct hda_codec *codec,
+ 					     const struct hda_fixup *fix,
+ 					     int action)
+@@ -5662,69 +5768,6 @@ static void alc285_fixup_invalidate_dacs(struct hda_codec *codec,
+ 	snd_hda_override_wcaps(codec, 0x03, 0);
+ }
+ 
+-static const struct hda_jack_keymap alc_headset_btn_keymap[] = {
+-	{ SND_JACK_BTN_0, KEY_PLAYPAUSE },
+-	{ SND_JACK_BTN_1, KEY_VOICECOMMAND },
+-	{ SND_JACK_BTN_2, KEY_VOLUMEUP },
+-	{ SND_JACK_BTN_3, KEY_VOLUMEDOWN },
+-	{}
+-};
+-
+-static void alc_headset_btn_callback(struct hda_codec *codec,
+-				     struct hda_jack_callback *jack)
+-{
+-	int report = 0;
+-
+-	if (jack->unsol_res & (7 << 13))
+-		report |= SND_JACK_BTN_0;
+-
+-	if (jack->unsol_res  & (1 << 16 | 3 << 8))
+-		report |= SND_JACK_BTN_1;
+-
+-	/* Volume up key */
+-	if (jack->unsol_res & (7 << 23))
+-		report |= SND_JACK_BTN_2;
+-
+-	/* Volume down key */
+-	if (jack->unsol_res & (7 << 10))
+-		report |= SND_JACK_BTN_3;
+-
+-	jack->jack->button_state = report;
+-}
+-
+-static void alc_fixup_headset_jack(struct hda_codec *codec,
+-				    const struct hda_fixup *fix, int action)
+-{
+-
+-	switch (action) {
+-	case HDA_FIXUP_ACT_PRE_PROBE:
+-		snd_hda_jack_detect_enable_callback(codec, 0x55,
+-						    alc_headset_btn_callback);
+-		snd_hda_jack_add_kctl(codec, 0x55, "Headset Jack", false,
+-				      SND_JACK_HEADSET, alc_headset_btn_keymap);
+-		break;
+-	case HDA_FIXUP_ACT_INIT:
+-		switch (codec->core.vendor_id) {
+-		case 0x10ec0215:
+-		case 0x10ec0225:
+-		case 0x10ec0285:
+-		case 0x10ec0295:
+-		case 0x10ec0289:
+-		case 0x10ec0299:
+-			alc_write_coef_idx(codec, 0x48, 0xd011);
+-			alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
+-			alc_update_coef_idx(codec, 0x44, 0x007f << 8, 0x0045 << 8);
+-			break;
+-		case 0x10ec0236:
+-		case 0x10ec0256:
+-			alc_write_coef_idx(codec, 0x48, 0xd011);
+-			alc_update_coef_idx(codec, 0x49, 0x007f, 0x0045);
+-			break;
+-		}
+-		break;
+-	}
+-}
+-
+ static void alc295_fixup_chromebook(struct hda_codec *codec,
+ 				    const struct hda_fixup *fix, int action)
+ {
+@@ -5863,8 +5906,6 @@ enum {
+ 	ALC298_FIXUP_DELL1_MIC_NO_PRESENCE,
+ 	ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE,
+ 	ALC275_FIXUP_DELL_XPS,
+-	ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE,
+-	ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2,
+ 	ALC293_FIXUP_LENOVO_SPK_NOISE,
+ 	ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY,
+ 	ALC255_FIXUP_DELL_SPK_NOISE,
+@@ -5923,6 +5964,7 @@ enum {
+ 	ALC294_FIXUP_ASUS_DUAL_SPK,
+ 	ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ 	ALC294_FIXUP_ASUS_HPE,
++	ALC285_FIXUP_HP_GPIO_LED,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -6604,23 +6646,6 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{}
+ 		}
+ 	},
+-	[ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE] = {
+-		.type = HDA_FIXUP_VERBS,
+-		.v.verbs = (const struct hda_verb[]) {
+-			/* Disable pass-through path for FRONT 14h */
+-			{0x20, AC_VERB_SET_COEF_INDEX, 0x36},
+-			{0x20, AC_VERB_SET_PROC_COEF, 0x1737},
+-			{}
+-		},
+-		.chained = true,
+-		.chain_id = ALC255_FIXUP_DELL1_MIC_NO_PRESENCE
+-	},
+-	[ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2] = {
+-		.type = HDA_FIXUP_FUNC,
+-		.v.func = alc256_fixup_dell_xps_13_headphone_noise2,
+-		.chained = true,
+-		.chain_id = ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE
+-	},
+ 	[ALC293_FIXUP_LENOVO_SPK_NOISE] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_disable_aamix,
+@@ -7061,6 +7086,10 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
+ 	},
++	[ALC285_FIXUP_HP_GPIO_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_gpio_led,
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7114,17 +7143,14 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1028, 0x06de, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
+ 	SND_PCI_QUIRK(0x1028, 0x06df, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
+ 	SND_PCI_QUIRK(0x1028, 0x06e0, "Dell", ALC293_FIXUP_DISABLE_AAMIX_MULTIJACK),
+-	SND_PCI_QUIRK(0x1028, 0x0704, "Dell XPS 13 9350", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2),
+ 	SND_PCI_QUIRK(0x1028, 0x0706, "Dell Inspiron 7559", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x0725, "Dell Inspiron 3162", ALC255_FIXUP_DELL_SPK_NOISE),
+ 	SND_PCI_QUIRK(0x1028, 0x0738, "Dell Precision 5820", ALC269_FIXUP_NO_SHUTUP),
+-	SND_PCI_QUIRK(0x1028, 0x075b, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2),
+ 	SND_PCI_QUIRK(0x1028, 0x075c, "Dell XPS 27 7760", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x075d, "Dell AIO", ALC298_FIXUP_SPK_VOLUME),
+ 	SND_PCI_QUIRK(0x1028, 0x07b0, "Dell Precision 7520", ALC295_FIXUP_DISABLE_DAC3),
+ 	SND_PCI_QUIRK(0x1028, 0x0798, "Dell Inspiron 17 7000 Gaming", ALC256_FIXUP_DELL_INSPIRON_7559_SUBWOOFER),
+ 	SND_PCI_QUIRK(0x1028, 0x080c, "Dell WYSE", ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE),
+-	SND_PCI_QUIRK(0x1028, 0x082a, "Dell XPS 13 9360", ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE2),
+ 	SND_PCI_QUIRK(0x1028, 0x084b, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ 	SND_PCI_QUIRK(0x1028, 0x084e, "Dell", ALC274_FIXUP_DELL_AIO_LINEOUT_VERB),
+ 	SND_PCI_QUIRK(0x1028, 0x0871, "Dell Precision 3630", ALC255_FIXUP_DELL_HEADSET_MIC),
+@@ -7208,6 +7234,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x83b9, "HP Spectre x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
++	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -7299,6 +7326,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x225d, "Thinkpad T480", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x2292, "Thinkpad X1 Yoga 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
+ 	SND_PCI_QUIRK(0x17aa, 0x2293, "Thinkpad X1 Carbon 7th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
++	SND_PCI_QUIRK(0x17aa, 0x22be, "Thinkpad X1 Carbon 8th", ALC285_FIXUP_THINKPAD_HEADSET_JACK),
+ 	SND_PCI_QUIRK(0x17aa, 0x30bb, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x30e2, "ThinkCentre AIO", ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY),
+ 	SND_PCI_QUIRK(0x17aa, 0x310c, "ThinkCentre Station", ALC294_FIXUP_LENOVO_MIC_LOCATION),
+@@ -7477,7 +7505,6 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC298_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "alc298-dell1"},
+ 	{.id = ALC298_FIXUP_DELL_AIO_MIC_NO_PRESENCE, .name = "alc298-dell-aio"},
+ 	{.id = ALC275_FIXUP_DELL_XPS, .name = "alc275-dell-xps"},
+-	{.id = ALC256_FIXUP_DELL_XPS_13_HEADPHONE_NOISE, .name = "alc256-dell-xps13"},
+ 	{.id = ALC293_FIXUP_LENOVO_SPK_NOISE, .name = "lenovo-spk-noise"},
+ 	{.id = ALC233_FIXUP_LENOVO_LINE2_MIC_HOTKEY, .name = "lenovo-hotkey"},
+ 	{.id = ALC255_FIXUP_DELL_SPK_NOISE, .name = "dell-spk-noise"},
+diff --git a/sound/pci/ice1712/prodigy_hifi.c b/sound/pci/ice1712/prodigy_hifi.c
+index 91f83cef0e56..9aa12a67d370 100644
+--- a/sound/pci/ice1712/prodigy_hifi.c
++++ b/sound/pci/ice1712/prodigy_hifi.c
+@@ -536,7 +536,7 @@ static int wm_adc_mux_enum_get(struct snd_kcontrol *kcontrol,
+ 	struct snd_ice1712 *ice = snd_kcontrol_chip(kcontrol);
+ 
+ 	mutex_lock(&ice->gpio_mutex);
+-	ucontrol->value.integer.value[0] = wm_get(ice, WM_ADC_MUX) & 0x1f;
++	ucontrol->value.enumerated.item[0] = wm_get(ice, WM_ADC_MUX) & 0x1f;
+ 	mutex_unlock(&ice->gpio_mutex);
+ 	return 0;
+ }
+@@ -550,7 +550,7 @@ static int wm_adc_mux_enum_put(struct snd_kcontrol *kcontrol,
+ 
+ 	mutex_lock(&ice->gpio_mutex);
+ 	oval = wm_get(ice, WM_ADC_MUX);
+-	nval = (oval & 0xe0) | ucontrol->value.integer.value[0];
++	nval = (oval & 0xe0) | ucontrol->value.enumerated.item[0];
+ 	if (nval != oval) {
+ 		wm_put(ice, WM_ADC_MUX, nval);
+ 		change = 1;
+diff --git a/sound/soc/codecs/cs4270.c b/sound/soc/codecs/cs4270.c
+index 5f25b9f872bd..8a02791e44ad 100644
+--- a/sound/soc/codecs/cs4270.c
++++ b/sound/soc/codecs/cs4270.c
+@@ -137,6 +137,9 @@ struct cs4270_private {
+ 
+ 	/* power domain regulators */
+ 	struct regulator_bulk_data supplies[ARRAY_SIZE(supply_names)];
++
++	/* reset gpio */
++	struct gpio_desc *reset_gpio;
+ };
+ 
+ static const struct snd_soc_dapm_widget cs4270_dapm_widgets[] = {
+@@ -648,6 +651,22 @@ static const struct regmap_config cs4270_regmap = {
+ 	.volatile_reg =		cs4270_reg_is_volatile,
+ };
+ 
++/**
++ * cs4270_i2c_remove - deinitialize the I2C interface of the CS4270
++ * @i2c_client: the I2C client object
++ *
++ * This function puts the chip into low power mode when the i2c device
++ * is removed.
++ */
++static int cs4270_i2c_remove(struct i2c_client *i2c_client)
++{
++	struct cs4270_private *cs4270 = i2c_get_clientdata(i2c_client);
++
++	gpiod_set_value_cansleep(cs4270->reset_gpio, 0);
++
++	return 0;
++}
++
+ /**
+  * cs4270_i2c_probe - initialize the I2C interface of the CS4270
+  * @i2c_client: the I2C client object
+@@ -660,7 +679,6 @@ static int cs4270_i2c_probe(struct i2c_client *i2c_client,
+ 	const struct i2c_device_id *id)
+ {
+ 	struct cs4270_private *cs4270;
+-	struct gpio_desc *reset_gpiod;
+ 	unsigned int val;
+ 	int ret, i;
+ 
+@@ -679,10 +697,21 @@ static int cs4270_i2c_probe(struct i2c_client *i2c_client,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	reset_gpiod = devm_gpiod_get_optional(&i2c_client->dev, "reset",
+-					      GPIOD_OUT_HIGH);
+-	if (PTR_ERR(reset_gpiod) == -EPROBE_DEFER)
+-		return -EPROBE_DEFER;
++	/* reset the device */
++	cs4270->reset_gpio = devm_gpiod_get_optional(&i2c_client->dev, "reset",
++						     GPIOD_OUT_LOW);
++	if (IS_ERR(cs4270->reset_gpio)) {
++		dev_dbg(&i2c_client->dev, "Error getting CS4270 reset GPIO\n");
++		return PTR_ERR(cs4270->reset_gpio);
++	}
++
++	if (cs4270->reset_gpio) {
++		dev_dbg(&i2c_client->dev, "Found reset GPIO\n");
++		gpiod_set_value_cansleep(cs4270->reset_gpio, 1);
++	}
++
++	/* Sleep 500ns before i2c communications */
++	ndelay(500);
+ 
+ 	cs4270->regmap = devm_regmap_init_i2c(i2c_client, &cs4270_regmap);
+ 	if (IS_ERR(cs4270->regmap))
+@@ -735,6 +764,7 @@ static struct i2c_driver cs4270_i2c_driver = {
+ 	},
+ 	.id_table = cs4270_id,
+ 	.probe = cs4270_i2c_probe,
++	.remove = cs4270_i2c_remove,
+ };
+ 
+ module_i2c_driver(cs4270_i2c_driver);
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 9fb54e6fe254..17962564866d 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -802,7 +802,13 @@ static void dapm_set_mixer_path_status(struct snd_soc_dapm_path *p, int i,
+ 			val = max - val;
+ 		p->connect = !!val;
+ 	} else {
+-		p->connect = 0;
++		/* since a virtual mixer has no backing registers to
++		 * decide which path to connect, it will try to match
++		 * with initial state.  This is to ensure
++		 * that the default mixer choice will be
++		 * correctly powered up during initialization.
++		 */
++		p->connect = invert;
+ 	}
+ }
+ 
+diff --git a/sound/soc/soc-ops.c b/sound/soc/soc-ops.c
+index 652657dc6809..55ffb34be95e 100644
+--- a/sound/soc/soc-ops.c
++++ b/sound/soc/soc-ops.c
+@@ -825,7 +825,7 @@ int snd_soc_get_xr_sx(struct snd_kcontrol *kcontrol,
+ 	unsigned int regbase = mc->regbase;
+ 	unsigned int regcount = mc->regcount;
+ 	unsigned int regwshift = component->val_bytes * BITS_PER_BYTE;
+-	unsigned int regwmask = (1<<regwshift)-1;
++	unsigned int regwmask = (1UL<<regwshift)-1;
+ 	unsigned int invert = mc->invert;
+ 	unsigned long mask = (1UL<<mc->nbits)-1;
+ 	long min = mc->min;
+@@ -874,7 +874,7 @@ int snd_soc_put_xr_sx(struct snd_kcontrol *kcontrol,
+ 	unsigned int regbase = mc->regbase;
+ 	unsigned int regcount = mc->regcount;
+ 	unsigned int regwshift = component->val_bytes * BITS_PER_BYTE;
+-	unsigned int regwmask = (1<<regwshift)-1;
++	unsigned int regwmask = (1UL<<regwshift)-1;
+ 	unsigned int invert = mc->invert;
+ 	unsigned long mask = (1UL<<mc->nbits)-1;
+ 	long max = mc->max;
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 2c59b3688ca0..8f6f0ad50288 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2236,7 +2236,8 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
+ 		switch (cmd) {
+ 		case SNDRV_PCM_TRIGGER_START:
+ 			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_PREPARE) &&
+-			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP))
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_STOP) &&
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
+ 				continue;
+ 
+ 			ret = dpcm_do_trigger(dpcm, be_substream, cmd);
+@@ -2266,7 +2267,8 @@ int dpcm_be_dai_trigger(struct snd_soc_pcm_runtime *fe, int stream,
+ 			be->dpcm[stream].state = SND_SOC_DPCM_STATE_START;
+ 			break;
+ 		case SNDRV_PCM_TRIGGER_STOP:
+-			if (be->dpcm[stream].state != SND_SOC_DPCM_STATE_START)
++			if ((be->dpcm[stream].state != SND_SOC_DPCM_STATE_START) &&
++			    (be->dpcm[stream].state != SND_SOC_DPCM_STATE_PAUSED))
+ 				continue;
+ 
+ 			if (!snd_soc_dpcm_can_be_free_stop(fe, be, stream))
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index 575da6aba807..a152409e8746 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -362,7 +362,7 @@ static int soc_tplg_add_kcontrol(struct soc_tplg *tplg,
+ 	struct snd_soc_component *comp = tplg->comp;
+ 
+ 	return soc_tplg_add_dcontrol(comp->card->snd_card,
+-				comp->dev, k, NULL, comp, kcontrol);
++				comp->dev, k, comp->name_prefix, comp, kcontrol);
+ }
+ 
+ /* remove a mixer kcontrol */
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index 10eb4b8e8e7e..d3259de43712 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -1551,8 +1551,10 @@ static int stm32_sai_sub_probe(struct platform_device *pdev)
+ 
+ 	ret = snd_soc_register_component(&pdev->dev, &stm32_component,
+ 					 &sai->cpu_dai_drv, 1);
+-	if (ret)
++	if (ret) {
++		snd_dmaengine_pcm_unregister(&pdev->dev);
+ 		return ret;
++	}
+ 
+ 	if (STM_SAI_PROTOCOL_IS_SPDIF(sai))
+ 		conf = &stm32_sai_pcm_config_spdif;
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 5ebca8013840..72b575c34860 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -359,6 +359,14 @@ static const struct usbmix_name_map corsair_virtuoso_map[] = {
+ 	{ 0 }
+ };
+ 
++/* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX
++ * response for Input Gain Pad (id=19, control=12).  Skip it.
++ */
++static const struct usbmix_name_map asus_rog_map[] = {
++	{ 19, NULL, 12 }, /* FU, Input Gain Pad */
++	{}
++};
++
+ /*
+  * Control map entries
+  */
+@@ -488,6 +496,26 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x1b1c, 0x0a42),
+ 		.map = corsair_virtuoso_map,
+ 	},
++	{	/* Gigabyte TRX40 Aorus Pro WiFi */
++		.id = USB_ID(0x0414, 0xa002),
++		.map = asus_rog_map,
++	},
++	{	/* ASUS ROG Zenith II */
++		.id = USB_ID(0x0b05, 0x1916),
++		.map = asus_rog_map,
++	},
++	{	/* ASUS ROG Strix */
++		.id = USB_ID(0x0b05, 0x1917),
++		.map = asus_rog_map,
++	},
++	{	/* MSI TRX40 Creator */
++		.id = USB_ID(0x0db0, 0x0d64),
++		.map = asus_rog_map,
++	},
++	{	/* MSI TRX40 */
++		.id = USB_ID(0x0db0, 0x543d),
++		.map = asus_rog_map,
++	},
+ 	{ 0 } /* terminator */
+ };
+ 
+diff --git a/tools/gpio/Makefile b/tools/gpio/Makefile
+index 6080de58861f..6289b8d20dff 100644
+--- a/tools/gpio/Makefile
++++ b/tools/gpio/Makefile
+@@ -35,7 +35,7 @@ $(OUTPUT)include/linux/gpio.h: ../../include/uapi/linux/gpio.h
+ 
+ prepare: $(OUTPUT)include/linux/gpio.h
+ 
+-GPIO_UTILS_IN := $(output)gpio-utils-in.o
++GPIO_UTILS_IN := $(OUTPUT)gpio-utils-in.o
+ $(GPIO_UTILS_IN): prepare FORCE
+ 	$(Q)$(MAKE) $(build)=gpio-utils
+ 
+diff --git a/tools/perf/Makefile.config b/tools/perf/Makefile.config
+index 80e55e796be9..5da344dc2cf3 100644
+--- a/tools/perf/Makefile.config
++++ b/tools/perf/Makefile.config
+@@ -228,8 +228,17 @@ strip-libs  = $(filter-out -l%,$(1))
+ 
+ PYTHON_CONFIG_SQ := $(call shell-sq,$(PYTHON_CONFIG))
+ 
++# Python 3.8 changed the output of `python-config --ldflags` to not include the
++# '-lpythonX.Y' flag unless '--embed' is also passed. The feature check for
++# libpython fails if that flag is not included in LDFLAGS
++ifeq ($(shell $(PYTHON_CONFIG_SQ) --ldflags --embed 2>&1 1>/dev/null; echo $$?), 0)
++  PYTHON_CONFIG_LDFLAGS := --ldflags --embed
++else
++  PYTHON_CONFIG_LDFLAGS := --ldflags
++endif
++
+ ifdef PYTHON_CONFIG
+-  PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) --ldflags 2>/dev/null)
++  PYTHON_EMBED_LDOPTS := $(shell $(PYTHON_CONFIG_SQ) $(PYTHON_CONFIG_LDFLAGS) 2>/dev/null)
+   PYTHON_EMBED_LDFLAGS := $(call strip-libs,$(PYTHON_EMBED_LDOPTS))
+   PYTHON_EMBED_LIBADD := $(call grep-libs,$(PYTHON_EMBED_LDOPTS)) -lutil
+   PYTHON_EMBED_CCOPTS := $(shell $(PYTHON_CONFIG_SQ) --includes 2>/dev/null)
+diff --git a/tools/testing/radix-tree/Makefile b/tools/testing/radix-tree/Makefile
+index 397d6b612502..aa6abfe0749c 100644
+--- a/tools/testing/radix-tree/Makefile
++++ b/tools/testing/radix-tree/Makefile
+@@ -7,8 +7,8 @@ LDLIBS+= -lpthread -lurcu
+ TARGETS = main idr-test multiorder xarray
+ CORE_OFILES := xarray.o radix-tree.o idr.o linux.o test.o find_bit.o bitmap.o
+ OFILES = main.o $(CORE_OFILES) regression1.o regression2.o regression3.o \
+-	 regression4.o \
+-	 tag_check.o multiorder.o idr-test.o iteration_check.o benchmark.o
++	 regression4.o tag_check.o multiorder.o idr-test.o iteration_check.o \
++	 iteration_check_2.o benchmark.o
+ 
+ ifndef SHIFT
+ 	SHIFT=3
+diff --git a/tools/testing/radix-tree/iteration_check_2.c b/tools/testing/radix-tree/iteration_check_2.c
+new file mode 100644
+index 000000000000..aac5c50a3674
+--- /dev/null
++++ b/tools/testing/radix-tree/iteration_check_2.c
+@@ -0,0 +1,87 @@
++// SPDX-License-Identifier: GPL-2.0-or-later
++/*
++ * iteration_check_2.c: Check that deleting a tagged entry doesn't cause
++ * an RCU walker to finish early.
++ * Copyright (c) 2020 Oracle
++ * Author: Matthew Wilcox <willy@infradead.org>
++ */
++#include <pthread.h>
++#include "test.h"
++
++static volatile bool test_complete;
++
++static void *iterator(void *arg)
++{
++	XA_STATE(xas, arg, 0);
++	void *entry;
++
++	rcu_register_thread();
++
++	while (!test_complete) {
++		xas_set(&xas, 0);
++		rcu_read_lock();
++		xas_for_each_marked(&xas, entry, ULONG_MAX, XA_MARK_0)
++			;
++		rcu_read_unlock();
++		assert(xas.xa_index >= 100);
++	}
++
++	rcu_unregister_thread();
++	return NULL;
++}
++
++static void *throbber(void *arg)
++{
++	struct xarray *xa = arg;
++
++	rcu_register_thread();
++
++	while (!test_complete) {
++		int i;
++
++		for (i = 0; i < 100; i++) {
++			xa_store(xa, i, xa_mk_value(i), GFP_KERNEL);
++			xa_set_mark(xa, i, XA_MARK_0);
++		}
++		for (i = 0; i < 100; i++)
++			xa_erase(xa, i);
++	}
++
++	rcu_unregister_thread();
++	return NULL;
++}
++
++void iteration_test2(unsigned test_duration)
++{
++	pthread_t threads[2];
++	DEFINE_XARRAY(array);
++	int i;
++
++	printv(1, "Running iteration test 2 for %d seconds\n", test_duration);
++
++	test_complete = false;
++
++	xa_store(&array, 100, xa_mk_value(100), GFP_KERNEL);
++	xa_set_mark(&array, 100, XA_MARK_0);
++
++	if (pthread_create(&threads[0], NULL, iterator, &array)) {
++		perror("create iterator thread");
++		exit(1);
++	}
++	if (pthread_create(&threads[1], NULL, throbber, &array)) {
++		perror("create throbber thread");
++		exit(1);
++	}
++
++	sleep(test_duration);
++	test_complete = true;
++
++	for (i = 0; i < 2; i++) {
++		if (pthread_join(threads[i], NULL)) {
++			perror("pthread_join");
++			exit(1);
++		}
++	}
++
++	xa_destroy(&array);
++}
+diff --git a/tools/testing/radix-tree/main.c b/tools/testing/radix-tree/main.c
+index 7a22d6e3732e..f2cbc8e5b97c 100644
+--- a/tools/testing/radix-tree/main.c
++++ b/tools/testing/radix-tree/main.c
+@@ -311,6 +311,7 @@ int main(int argc, char **argv)
+ 	regression4_test();
+ 	iteration_test(0, 10 + 90 * long_run);
+ 	iteration_test(7, 10 + 90 * long_run);
++	iteration_test2(10 + 90 * long_run);
+ 	single_thread_tests(long_run);
+ 
+ 	/* Free any remaining preallocated nodes */
+diff --git a/tools/testing/radix-tree/test.h b/tools/testing/radix-tree/test.h
+index 1ee4b2c0ad10..34dab4d18744 100644
+--- a/tools/testing/radix-tree/test.h
++++ b/tools/testing/radix-tree/test.h
+@@ -34,6 +34,7 @@ void xarray_tests(void);
+ void tag_check(void);
+ void multiorder_checks(void);
+ void iteration_test(unsigned order, unsigned duration);
++void iteration_test2(unsigned duration);
+ void benchmark(void);
+ void idr_checks(void);
+ void ida_tests(void);
+diff --git a/tools/testing/selftests/powerpc/mm/.gitignore b/tools/testing/selftests/powerpc/mm/.gitignore
+index 0ebeaea22641..97f7922c52c5 100644
+--- a/tools/testing/selftests/powerpc/mm/.gitignore
++++ b/tools/testing/selftests/powerpc/mm/.gitignore
+@@ -6,3 +6,4 @@ segv_errors
+ wild_bctr
+ large_vm_fork_separation
+ bad_accesses
++tlbie_test
+diff --git a/tools/testing/selftests/powerpc/pmu/ebb/Makefile b/tools/testing/selftests/powerpc/pmu/ebb/Makefile
+index 417306353e07..ca35dd8848b0 100644
+--- a/tools/testing/selftests/powerpc/pmu/ebb/Makefile
++++ b/tools/testing/selftests/powerpc/pmu/ebb/Makefile
+@@ -7,6 +7,7 @@ noarg:
+ # The EBB handler is 64-bit code and everything links against it
+ CFLAGS += -m64
+ 
++TMPOUT = $(OUTPUT)/
+ # Toolchains may build PIE by default which breaks the assembly
+ no-pie-option := $(call try-run, echo 'int main() { return 0; }' | \
+         $(CC) -Werror $(KBUILD_CPPFLAGS) $(CC_OPTION_CFLAGS) -no-pie -x c - -o "$$TMP", -no-pie)
+diff --git a/tools/testing/selftests/vm/map_hugetlb.c b/tools/testing/selftests/vm/map_hugetlb.c
+index 5a2d7b8efc40..6af951900aa3 100644
+--- a/tools/testing/selftests/vm/map_hugetlb.c
++++ b/tools/testing/selftests/vm/map_hugetlb.c
+@@ -45,20 +45,20 @@ static void check_bytes(char *addr)
+ 	printf("First hex is %x\n", *((unsigned int *)addr));
+ }
+ 
+-static void write_bytes(char *addr)
++static void write_bytes(char *addr, size_t length)
+ {
+ 	unsigned long i;
+ 
+-	for (i = 0; i < LENGTH; i++)
++	for (i = 0; i < length; i++)
+ 		*(addr + i) = (char)i;
+ }
+ 
+-static int read_bytes(char *addr)
++static int read_bytes(char *addr, size_t length)
+ {
+ 	unsigned long i;
+ 
+ 	check_bytes(addr);
+-	for (i = 0; i < LENGTH; i++)
++	for (i = 0; i < length; i++)
+ 		if (*(addr + i) != (char)i) {
+ 			printf("Mismatch at %lu\n", i);
+ 			return 1;
+@@ -96,11 +96,11 @@ int main(int argc, char **argv)
+ 
+ 	printf("Returned address is %p\n", addr);
+ 	check_bytes(addr);
+-	write_bytes(addr);
+-	ret = read_bytes(addr);
++	write_bytes(addr, length);
++	ret = read_bytes(addr, length);
+ 
+ 	/* munmap() length of MAP_HUGETLB memory must be hugepage aligned */
+-	if (munmap(addr, LENGTH)) {
++	if (munmap(addr, length)) {
+ 		perror("munmap");
+ 		exit(1);
+ 	}
+diff --git a/tools/testing/selftests/vm/mlock2-tests.c b/tools/testing/selftests/vm/mlock2-tests.c
+index 637b6d0ac0d0..11b2301f3aa3 100644
+--- a/tools/testing/selftests/vm/mlock2-tests.c
++++ b/tools/testing/selftests/vm/mlock2-tests.c
+@@ -67,59 +67,6 @@ out:
+ 	return ret;
+ }
+ 
+-static uint64_t get_pageflags(unsigned long addr)
+-{
+-	FILE *file;
+-	uint64_t pfn;
+-	unsigned long offset;
+-
+-	file = fopen("/proc/self/pagemap", "r");
+-	if (!file) {
+-		perror("fopen pagemap");
+-		_exit(1);
+-	}
+-
+-	offset = addr / getpagesize() * sizeof(pfn);
+-
+-	if (fseek(file, offset, SEEK_SET)) {
+-		perror("fseek pagemap");
+-		_exit(1);
+-	}
+-
+-	if (fread(&pfn, sizeof(pfn), 1, file) != 1) {
+-		perror("fread pagemap");
+-		_exit(1);
+-	}
+-
+-	fclose(file);
+-	return pfn;
+-}
+-
+-static uint64_t get_kpageflags(unsigned long pfn)
+-{
+-	uint64_t flags;
+-	FILE *file;
+-
+-	file = fopen("/proc/kpageflags", "r");
+-	if (!file) {
+-		perror("fopen kpageflags");
+-		_exit(1);
+-	}
+-
+-	if (fseek(file, pfn * sizeof(flags), SEEK_SET)) {
+-		perror("fseek kpageflags");
+-		_exit(1);
+-	}
+-
+-	if (fread(&flags, sizeof(flags), 1, file) != 1) {
+-		perror("fread kpageflags");
+-		_exit(1);
+-	}
+-
+-	fclose(file);
+-	return flags;
+-}
+-
+ #define VMFLAGS "VmFlags:"
+ 
+ static bool is_vmflag_set(unsigned long addr, const char *vmflag)
+@@ -159,19 +106,13 @@ out:
+ #define RSS  "Rss:"
+ #define LOCKED "lo"
+ 
+-static bool is_vma_lock_on_fault(unsigned long addr)
++static unsigned long get_value_for_name(unsigned long addr, const char *name)
+ {
+-	bool ret = false;
+-	bool locked;
+-	FILE *smaps = NULL;
+-	unsigned long vma_size, vma_rss;
+ 	char *line = NULL;
+-	char *value;
+ 	size_t size = 0;
+-
+-	locked = is_vmflag_set(addr, LOCKED);
+-	if (!locked)
+-		goto out;
++	char *value_ptr;
++	FILE *smaps = NULL;
++	unsigned long value = -1UL;
+ 
+ 	smaps = seek_to_smaps_entry(addr);
+ 	if (!smaps) {
+@@ -180,112 +121,70 @@ static bool is_vma_lock_on_fault(unsigned long addr)
+ 	}
+ 
+ 	while (getline(&line, &size, smaps) > 0) {
+-		if (!strstr(line, SIZE)) {
++		if (!strstr(line, name)) {
+ 			free(line);
+ 			line = NULL;
+ 			size = 0;
+ 			continue;
+ 		}
+ 
+-		value = line + strlen(SIZE);
+-		if (sscanf(value, "%lu kB", &vma_size) < 1) {
++		value_ptr = line + strlen(name);
++		if (sscanf(value_ptr, "%lu kB", &value) < 1) {
+ 			printf("Unable to parse smaps entry for Size\n");
+ 			goto out;
+ 		}
+ 		break;
+ 	}
+ 
+-	while (getline(&line, &size, smaps) > 0) {
+-		if (!strstr(line, RSS)) {
+-			free(line);
+-			line = NULL;
+-			size = 0;
+-			continue;
+-		}
+-
+-		value = line + strlen(RSS);
+-		if (sscanf(value, "%lu kB", &vma_rss) < 1) {
+-			printf("Unable to parse smaps entry for Rss\n");
+-			goto out;
+-		}
+-		break;
+-	}
+-
+-	ret = locked && (vma_rss < vma_size);
+ out:
+-	free(line);
+ 	if (smaps)
+ 		fclose(smaps);
+-	return ret;
++	free(line);
++	return value;
+ }
+ 
+-#define PRESENT_BIT     0x8000000000000000ULL
+-#define PFN_MASK        0x007FFFFFFFFFFFFFULL
+-#define UNEVICTABLE_BIT (1UL << 18)
+-
+-static int lock_check(char *map)
++static bool is_vma_lock_on_fault(unsigned long addr)
+ {
+-	unsigned long page_size = getpagesize();
+-	uint64_t page1_flags, page2_flags;
++	bool locked;
++	unsigned long vma_size, vma_rss;
+ 
+-	page1_flags = get_pageflags((unsigned long)map);
+-	page2_flags = get_pageflags((unsigned long)map + page_size);
++	locked = is_vmflag_set(addr, LOCKED);
++	if (!locked)
++		return false;
+ 
+-	/* Both pages should be present */
+-	if (((page1_flags & PRESENT_BIT) == 0) ||
+-	    ((page2_flags & PRESENT_BIT) == 0)) {
+-		printf("Failed to make both pages present\n");
+-		return 1;
+-	}
++	vma_size = get_value_for_name(addr, SIZE);
++	vma_rss = get_value_for_name(addr, RSS);
+ 
+-	page1_flags = get_kpageflags(page1_flags & PFN_MASK);
+-	page2_flags = get_kpageflags(page2_flags & PFN_MASK);
++	/* only one page is faulted in */
++	return (vma_rss < vma_size);
++}
+ 
+-	/* Both pages should be unevictable */
+-	if (((page1_flags & UNEVICTABLE_BIT) == 0) ||
+-	    ((page2_flags & UNEVICTABLE_BIT) == 0)) {
+-		printf("Failed to make both pages unevictable\n");
+-		return 1;
+-	}
++#define PRESENT_BIT     0x8000000000000000ULL
++#define PFN_MASK        0x007FFFFFFFFFFFFFULL
++#define UNEVICTABLE_BIT (1UL << 18)
+ 
+-	if (!is_vmflag_set((unsigned long)map, LOCKED)) {
+-		printf("VMA flag %s is missing on page 1\n", LOCKED);
+-		return 1;
+-	}
++static int lock_check(unsigned long addr)
++{
++	bool locked;
++	unsigned long vma_size, vma_rss;
+ 
+-	if (!is_vmflag_set((unsigned long)map + page_size, LOCKED)) {
+-		printf("VMA flag %s is missing on page 2\n", LOCKED);
+-		return 1;
+-	}
++	locked = is_vmflag_set(addr, LOCKED);
++	if (!locked)
++		return false;
+ 
+-	return 0;
++	vma_size = get_value_for_name(addr, SIZE);
++	vma_rss = get_value_for_name(addr, RSS);
++
++	return (vma_rss == vma_size);
+ }
+ 
+ static int unlock_lock_check(char *map)
+ {
+-	unsigned long page_size = getpagesize();
+-	uint64_t page1_flags, page2_flags;
+-
+-	page1_flags = get_pageflags((unsigned long)map);
+-	page2_flags = get_pageflags((unsigned long)map + page_size);
+-	page1_flags = get_kpageflags(page1_flags & PFN_MASK);
+-	page2_flags = get_kpageflags(page2_flags & PFN_MASK);
+-
+-	if ((page1_flags & UNEVICTABLE_BIT) || (page2_flags & UNEVICTABLE_BIT)) {
+-		printf("A page is still marked unevictable after unlock\n");
+-		return 1;
+-	}
+-
+ 	if (is_vmflag_set((unsigned long)map, LOCKED)) {
+ 		printf("VMA flag %s is present on page 1 after unlock\n", LOCKED);
+ 		return 1;
+ 	}
+ 
+-	if (is_vmflag_set((unsigned long)map + page_size, LOCKED)) {
+-		printf("VMA flag %s is present on page 2 after unlock\n", LOCKED);
+-		return 1;
+-	}
+-
+ 	return 0;
+ }
+ 
+@@ -311,7 +210,7 @@ static int test_mlock_lock()
+ 		goto unmap;
+ 	}
+ 
+-	if (lock_check(map))
++	if (!lock_check((unsigned long)map))
+ 		goto unmap;
+ 
+ 	/* Now unlock and recheck attributes */
+@@ -330,64 +229,18 @@ out:
+ 
+ static int onfault_check(char *map)
+ {
+-	unsigned long page_size = getpagesize();
+-	uint64_t page1_flags, page2_flags;
+-
+-	page1_flags = get_pageflags((unsigned long)map);
+-	page2_flags = get_pageflags((unsigned long)map + page_size);
+-
+-	/* Neither page should be present */
+-	if ((page1_flags & PRESENT_BIT) || (page2_flags & PRESENT_BIT)) {
+-		printf("Pages were made present by MLOCK_ONFAULT\n");
+-		return 1;
+-	}
+-
+ 	*map = 'a';
+-	page1_flags = get_pageflags((unsigned long)map);
+-	page2_flags = get_pageflags((unsigned long)map + page_size);
+-
+-	/* Only page 1 should be present */
+-	if ((page1_flags & PRESENT_BIT) == 0) {
+-		printf("Page 1 is not present after fault\n");
+-		return 1;
+-	} else if (page2_flags & PRESENT_BIT) {
+-		printf("Page 2 was made present\n");
+-		return 1;
+-	}
+-
+-	page1_flags = get_kpageflags(page1_flags & PFN_MASK);
+-
+-	/* Page 1 should be unevictable */
+-	if ((page1_flags & UNEVICTABLE_BIT) == 0) {
+-		printf("Failed to make faulted page unevictable\n");
+-		return 1;
+-	}
+-
+ 	if (!is_vma_lock_on_fault((unsigned long)map)) {
+ 		printf("VMA is not marked for lock on fault\n");
+ 		return 1;
+ 	}
+ 
+-	if (!is_vma_lock_on_fault((unsigned long)map + page_size)) {
+-		printf("VMA is not marked for lock on fault\n");
+-		return 1;
+-	}
+-
+ 	return 0;
+ }
+ 
+ static int unlock_onfault_check(char *map)
+ {
+ 	unsigned long page_size = getpagesize();
+-	uint64_t page1_flags;
+-
+-	page1_flags = get_pageflags((unsigned long)map);
+-	page1_flags = get_kpageflags(page1_flags & PFN_MASK);
+-
+-	if (page1_flags & UNEVICTABLE_BIT) {
+-		printf("Page 1 is still marked unevictable after unlock\n");
+-		return 1;
+-	}
+ 
+ 	if (is_vma_lock_on_fault((unsigned long)map) ||
+ 	    is_vma_lock_on_fault((unsigned long)map + page_size)) {
+@@ -445,7 +298,6 @@ static int test_lock_onfault_of_present()
+ 	char *map;
+ 	int ret = 1;
+ 	unsigned long page_size = getpagesize();
+-	uint64_t page1_flags, page2_flags;
+ 
+ 	map = mmap(NULL, 2 * page_size, PROT_READ | PROT_WRITE,
+ 		   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
+@@ -465,17 +317,6 @@ static int test_lock_onfault_of_present()
+ 		goto unmap;
+ 	}
+ 
+-	page1_flags = get_pageflags((unsigned long)map);
+-	page2_flags = get_pageflags((unsigned long)map + page_size);
+-	page1_flags = get_kpageflags(page1_flags & PFN_MASK);
+-	page2_flags = get_kpageflags(page2_flags & PFN_MASK);
+-
+-	/* Page 1 should be unevictable */
+-	if ((page1_flags & UNEVICTABLE_BIT) == 0) {
+-		printf("Failed to make present page unevictable\n");
+-		goto unmap;
+-	}
+-
+ 	if (!is_vma_lock_on_fault((unsigned long)map) ||
+ 	    !is_vma_lock_on_fault((unsigned long)map + page_size)) {
+ 		printf("VMA with present pages is not marked lock on fault\n");
+@@ -507,7 +348,7 @@ static int test_munlockall()
+ 		goto out;
+ 	}
+ 
+-	if (lock_check(map))
++	if (!lock_check((unsigned long)map))
+ 		goto unmap;
+ 
+ 	if (munlockall()) {
+@@ -549,7 +390,7 @@ static int test_munlockall()
+ 		goto out;
+ 	}
+ 
+-	if (lock_check(map))
++	if (!lock_check((unsigned long)map))
+ 		goto unmap;
+ 
+ 	if (munlockall()) {
+diff --git a/tools/testing/selftests/x86/ptrace_syscall.c b/tools/testing/selftests/x86/ptrace_syscall.c
+index 6f22238f3217..12aaa063196e 100644
+--- a/tools/testing/selftests/x86/ptrace_syscall.c
++++ b/tools/testing/selftests/x86/ptrace_syscall.c
+@@ -414,8 +414,12 @@ int main()
+ 
+ #if defined(__i386__) && (!defined(__GLIBC__) || __GLIBC__ > 2 || __GLIBC_MINOR__ >= 16)
+ 	vsyscall32 = (void *)getauxval(AT_SYSINFO);
+-	printf("[RUN]\tCheck AT_SYSINFO return regs\n");
+-	test_sys32_regs(do_full_vsyscall32);
++	if (vsyscall32) {
++		printf("[RUN]\tCheck AT_SYSINFO return regs\n");
++		test_sys32_regs(do_full_vsyscall32);
++	} else {
++		printf("[SKIP]\tAT_SYSINFO is not available\n");
++	}
+ #endif
+ 
+ 	test_ptrace_syscall_restart();



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-21 11:24 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-21 11:24 UTC (permalink / raw
  To: gentoo-commits

commit:     c8db58d5bb6e1e69f8939687f519e1feb5108839
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Tue Apr 21 11:24:35 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Tue Apr 21 11:24:35 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=c8db58d5

Linux patch 5.6.6

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1005_linux-5.6.6.patch | 2516 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2520 insertions(+)

diff --git a/0000_README b/0000_README
index 7f000bc..073a921 100644
--- a/0000_README
+++ b/0000_README
@@ -63,6 +63,10 @@ Patch:  1004_linux-5.6.5.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.5
 
+Patch:  1005_linux-5.6.6.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.6
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1005_linux-5.6.6.patch b/1005_linux-5.6.6.patch
new file mode 100644
index 0000000..af76f9a
--- /dev/null
+++ b/1005_linux-5.6.6.patch
@@ -0,0 +1,2516 @@
+diff --git a/Makefile b/Makefile
+index 0d7098842d56..af76c00de7f6 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 5
++SUBLEVEL = 6
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/imx7-colibri.dtsi b/arch/arm/boot/dts/imx7-colibri.dtsi
+index 04717cf69db0..9bad960f2b39 100644
+--- a/arch/arm/boot/dts/imx7-colibri.dtsi
++++ b/arch/arm/boot/dts/imx7-colibri.dtsi
+@@ -345,7 +345,7 @@
+ &iomuxc {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&pinctrl_gpio1 &pinctrl_gpio2 &pinctrl_gpio3 &pinctrl_gpio4
+-		     &pinctrl_gpio7>;
++		     &pinctrl_gpio7 &pinctrl_usbc_det>;
+ 
+ 	pinctrl_gpio1: gpio1-grp {
+ 		fsl,pins = <
+@@ -450,7 +450,6 @@
+ 
+ 	pinctrl_enet1: enet1grp {
+ 		fsl,pins = <
+-			MX7D_PAD_ENET1_CRS__GPIO7_IO14			0x14
+ 			MX7D_PAD_ENET1_RGMII_RX_CTL__ENET1_RGMII_RX_CTL	0x73
+ 			MX7D_PAD_ENET1_RGMII_RD0__ENET1_RGMII_RD0	0x73
+ 			MX7D_PAD_ENET1_RGMII_RD1__ENET1_RGMII_RD1	0x73
+@@ -648,6 +647,12 @@
+ 		>;
+ 	};
+ 
++	pinctrl_usbc_det: gpio-usbc-det {
++		fsl,pins = <
++			MX7D_PAD_ENET1_CRS__GPIO7_IO14	0x14
++		>;
++	};
++
+ 	pinctrl_usbh_reg: gpio-usbh-vbus {
+ 		fsl,pins = <
+ 			MX7D_PAD_UART3_CTS_B__GPIO4_IO7	0x14 /* SODIMM 129 USBH PEN */
+diff --git a/arch/arm64/boot/dts/freescale/imx8mq-librem5-devkit.dts b/arch/arm64/boot/dts/freescale/imx8mq-librem5-devkit.dts
+index 764a4cb4e125..161406445faf 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mq-librem5-devkit.dts
++++ b/arch/arm64/boot/dts/freescale/imx8mq-librem5-devkit.dts
+@@ -750,6 +750,7 @@
+ };
+ 
+ &usb3_phy0 {
++	vbus-supply = <&reg_5v_p>;
+ 	status = "okay";
+ };
+ 
+diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
+index 354b11e27c07..033a48f30dbb 100644
+--- a/arch/arm64/kernel/vdso.c
++++ b/arch/arm64/kernel/vdso.c
+@@ -260,18 +260,7 @@ static int __aarch32_alloc_vdso_pages(void)
+ 	if (ret)
+ 		return ret;
+ 
+-	ret = aarch32_alloc_kuser_vdso_page();
+-	if (ret) {
+-		unsigned long c_vvar =
+-			(unsigned long)page_to_virt(aarch32_vdso_pages[C_VVAR]);
+-		unsigned long c_vdso =
+-			(unsigned long)page_to_virt(aarch32_vdso_pages[C_VDSO]);
+-
+-		free_page(c_vvar);
+-		free_page(c_vdso);
+-	}
+-
+-	return ret;
++	return aarch32_alloc_kuser_vdso_page();
+ }
+ #else
+ static int __aarch32_alloc_vdso_pages(void)
+diff --git a/arch/x86/include/asm/microcode_amd.h b/arch/x86/include/asm/microcode_amd.h
+index 6685e1218959..7063b5a43220 100644
+--- a/arch/x86/include/asm/microcode_amd.h
++++ b/arch/x86/include/asm/microcode_amd.h
+@@ -41,7 +41,7 @@ struct microcode_amd {
+ 	unsigned int			mpb[0];
+ };
+ 
+-#define PATCH_MAX_SIZE PAGE_SIZE
++#define PATCH_MAX_SIZE (3 * PAGE_SIZE)
+ 
+ #ifdef CONFIG_MICROCODE_AMD
+ extern void __init load_ucode_amd_bsp(unsigned int family);
+diff --git a/arch/x86/kernel/cpu/resctrl/core.c b/arch/x86/kernel/cpu/resctrl/core.c
+index 89049b343c7a..d8cc5223b7ce 100644
+--- a/arch/x86/kernel/cpu/resctrl/core.c
++++ b/arch/x86/kernel/cpu/resctrl/core.c
+@@ -578,6 +578,8 @@ static void domain_add_cpu(int cpu, struct rdt_resource *r)
+ 	d->id = id;
+ 	cpumask_set_cpu(cpu, &d->cpu_mask);
+ 
++	rdt_domain_reconfigure_cdp(r);
++
+ 	if (r->alloc_capable && domain_setup_ctrlval(r, d)) {
+ 		kfree(d);
+ 		return;
+diff --git a/arch/x86/kernel/cpu/resctrl/internal.h b/arch/x86/kernel/cpu/resctrl/internal.h
+index 181c992f448c..3dd13f3a8b23 100644
+--- a/arch/x86/kernel/cpu/resctrl/internal.h
++++ b/arch/x86/kernel/cpu/resctrl/internal.h
+@@ -601,5 +601,6 @@ bool has_busy_rmid(struct rdt_resource *r, struct rdt_domain *d);
+ void __check_limbo(struct rdt_domain *d, bool force_free);
+ bool cbm_validate_intel(char *buf, u32 *data, struct rdt_resource *r);
+ bool cbm_validate_amd(char *buf, u32 *data, struct rdt_resource *r);
++void rdt_domain_reconfigure_cdp(struct rdt_resource *r);
+ 
+ #endif /* _ASM_X86_RESCTRL_INTERNAL_H */
+diff --git a/arch/x86/kernel/cpu/resctrl/rdtgroup.c b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+index 064e9ef44cd6..5a359d9fcc05 100644
+--- a/arch/x86/kernel/cpu/resctrl/rdtgroup.c
++++ b/arch/x86/kernel/cpu/resctrl/rdtgroup.c
+@@ -1859,6 +1859,19 @@ static int set_cache_qos_cfg(int level, bool enable)
+ 	return 0;
+ }
+ 
++/* Restore the qos cfg state when a domain comes online */
++void rdt_domain_reconfigure_cdp(struct rdt_resource *r)
++{
++	if (!r->alloc_capable)
++		return;
++
++	if (r == &rdt_resources_all[RDT_RESOURCE_L2DATA])
++		l2_qos_cfg_update(&r->alloc_enabled);
++
++	if (r == &rdt_resources_all[RDT_RESOURCE_L3DATA])
++		l3_qos_cfg_update(&r->alloc_enabled);
++}
++
+ /*
+  * Enable or disable the MBA software controller
+  * which helps user specify bandwidth in MBps.
+@@ -3072,7 +3085,8 @@ static int rdtgroup_rmdir(struct kernfs_node *kn)
+ 	 * If the rdtgroup is a mon group and parent directory
+ 	 * is a valid "mon_groups" directory, remove the mon group.
+ 	 */
+-	if (rdtgrp->type == RDTCTRL_GROUP && parent_kn == rdtgroup_default.kn) {
++	if (rdtgrp->type == RDTCTRL_GROUP && parent_kn == rdtgroup_default.kn &&
++	    rdtgrp != &rdtgroup_default) {
+ 		if (rdtgrp->mode == RDT_MODE_PSEUDO_LOCKSETUP ||
+ 		    rdtgrp->mode == RDT_MODE_PSEUDO_LOCKED) {
+ 			ret = rdtgroup_ctrl_remove(kn, rdtgrp);
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 29b8fa618a02..35dd2f1fb0e6 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -1646,6 +1646,7 @@ static int acpi_ec_add(struct acpi_device *device)
+ 
+ 		if (boot_ec && ec->command_addr == boot_ec->command_addr &&
+ 		    ec->data_addr == boot_ec->data_addr) {
++			boot_ec_is_ecdt = false;
+ 			/*
+ 			 * Trust PNP0C09 namespace location rather than
+ 			 * ECDT ID. But trust ECDT GPE rather than _GPE
+@@ -1665,12 +1666,9 @@ static int acpi_ec_add(struct acpi_device *device)
+ 
+ 	if (ec == boot_ec)
+ 		acpi_handle_info(boot_ec->handle,
+-				 "Boot %s EC initialization complete\n",
++				 "Boot %s EC used to handle transactions and events\n",
+ 				 boot_ec_is_ecdt ? "ECDT" : "DSDT");
+ 
+-	acpi_handle_info(ec->handle,
+-			 "EC: Used to handle transactions and events\n");
+-
+ 	device->driver_data = ec;
+ 
+ 	ret = !!request_region(ec->data_addr, 1, "EC data");
+diff --git a/drivers/acpi/nfit/core.c b/drivers/acpi/nfit/core.c
+index a3320f93616d..d0090f71585c 100644
+--- a/drivers/acpi/nfit/core.c
++++ b/drivers/acpi/nfit/core.c
+@@ -360,7 +360,7 @@ static union acpi_object *acpi_label_info(acpi_handle handle)
+ 
+ static u8 nfit_dsm_revid(unsigned family, unsigned func)
+ {
+-	static const u8 revid_table[NVDIMM_FAMILY_MAX+1][32] = {
++	static const u8 revid_table[NVDIMM_FAMILY_MAX+1][NVDIMM_CMD_MAX+1] = {
+ 		[NVDIMM_FAMILY_INTEL] = {
+ 			[NVDIMM_INTEL_GET_MODES] = 2,
+ 			[NVDIMM_INTEL_GET_FWINFO] = 2,
+@@ -386,7 +386,7 @@ static u8 nfit_dsm_revid(unsigned family, unsigned func)
+ 
+ 	if (family > NVDIMM_FAMILY_MAX)
+ 		return 0;
+-	if (func > 31)
++	if (func > NVDIMM_CMD_MAX)
+ 		return 0;
+ 	id = revid_table[family][func];
+ 	if (id == 0)
+@@ -492,7 +492,8 @@ int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc, struct nvdimm *nvdimm,
+ 	 * Check for a valid command.  For ND_CMD_CALL, we also have to
+ 	 * make sure that the DSM function is supported.
+ 	 */
+-	if (cmd == ND_CMD_CALL && !test_bit(func, &dsm_mask))
++	if (cmd == ND_CMD_CALL &&
++	    (func > NVDIMM_CMD_MAX || !test_bit(func, &dsm_mask)))
+ 		return -ENOTTY;
+ 	else if (!test_bit(cmd, &cmd_mask))
+ 		return -ENOTTY;
+@@ -3492,7 +3493,8 @@ static int acpi_nfit_clear_to_send(struct nvdimm_bus_descriptor *nd_desc,
+ 	if (nvdimm && cmd == ND_CMD_CALL &&
+ 			call_pkg->nd_family == NVDIMM_FAMILY_INTEL) {
+ 		func = call_pkg->nd_command;
+-		if ((1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK)
++		if (func > NVDIMM_CMD_MAX ||
++		    (1 << func) & NVDIMM_INTEL_SECURITY_CMDMASK)
+ 			return -EOPNOTSUPP;
+ 	}
+ 
+diff --git a/drivers/acpi/nfit/nfit.h b/drivers/acpi/nfit/nfit.h
+index 24241941181c..b317f4043705 100644
+--- a/drivers/acpi/nfit/nfit.h
++++ b/drivers/acpi/nfit/nfit.h
+@@ -34,6 +34,7 @@
+ 		| ACPI_NFIT_MEM_NOT_ARMED | ACPI_NFIT_MEM_MAP_FAILED)
+ 
+ #define NVDIMM_FAMILY_MAX NVDIMM_FAMILY_HYPERV
++#define NVDIMM_CMD_MAX 31
+ 
+ #define NVDIMM_STANDARD_CMDMASK \
+ (1 << ND_CMD_SMART | 1 << ND_CMD_SMART_THRESHOLD | 1 << ND_CMD_DIMM_FLAGS \
+diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
+index 22aede42a336..bda92980e015 100644
+--- a/drivers/clk/at91/clk-usb.c
++++ b/drivers/clk/at91/clk-usb.c
+@@ -211,7 +211,7 @@ _at91sam9x5_clk_register_usb(struct regmap *regmap, const char *name,
+ 
+ 	usb->hw.init = &init;
+ 	usb->regmap = regmap;
+-	usb->usbs_mask = SAM9X5_USBS_MASK;
++	usb->usbs_mask = usbs_mask;
+ 
+ 	hw = &usb->hw;
+ 	ret = clk_hw_register(NULL, &usb->hw);
+diff --git a/drivers/clk/at91/sam9x60.c b/drivers/clk/at91/sam9x60.c
+index 77398aefeb6d..7338a3bc71eb 100644
+--- a/drivers/clk/at91/sam9x60.c
++++ b/drivers/clk/at91/sam9x60.c
+@@ -237,9 +237,8 @@ static void __init sam9x60_pmc_setup(struct device_node *np)
+ 
+ 	parent_names[0] = "pllack";
+ 	parent_names[1] = "upllck";
+-	parent_names[2] = "mainck";
+-	parent_names[3] = "mainck";
+-	hw = sam9x60_clk_register_usb(regmap, "usbck", parent_names, 4);
++	parent_names[2] = "main_osc";
++	hw = sam9x60_clk_register_usb(regmap, "usbck", parent_names, 3);
+ 	if (IS_ERR(hw))
+ 		goto err_free;
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index 48e2863461b7..c8bf9cb3cebf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -2285,6 +2285,8 @@ static int amdgpu_device_ip_suspend_phase1(struct amdgpu_device *adev)
+ {
+ 	int i, r;
+ 
++	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
++	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+ 
+ 	for (i = adev->num_ip_blocks - 1; i >= 0; i--) {
+ 		if (!adev->ip_blocks[i].status.valid)
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 5d5bd34eb4a7..73337e658aff 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1175,6 +1175,8 @@ struct amdgpu_gfxoff_quirk {
+ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=204689 */
+ 	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc8 },
++	/* https://bugzilla.kernel.org/show_bug.cgi?id=207171 */
++	{ 0x1002, 0x15dd, 0x103c, 0x83e7, 0xd3 },
+ 	{ 0, 0, 0, 0, 0 },
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+index bf04cfefb283..662af0f2b0cc 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/smu7_hwmgr.c
+@@ -3804,9 +3804,12 @@ static int smu7_trim_single_dpm_states(struct pp_hwmgr *hwmgr,
+ {
+ 	uint32_t i;
+ 
++	/* force the trim if mclk_switching is disabled to prevent flicker */
++	bool force_trim = (low_limit == high_limit);
+ 	for (i = 0; i < dpm_table->count; i++) {
+ 	/*skip the trim if od is enabled*/
+-		if (!hwmgr->od_enabled && (dpm_table->dpm_levels[i].value < low_limit
++		if ((!hwmgr->od_enabled || force_trim)
++			&& (dpm_table->dpm_levels[i].value < low_limit
+ 			|| dpm_table->dpm_levels[i].value > high_limit))
+ 			dpm_table->dpm_levels[i].enabled = false;
+ 		else
+diff --git a/drivers/gpu/drm/i915/i915_perf.c b/drivers/gpu/drm/i915/i915_perf.c
+index 3b6b913bd27a..a649b6ca6b26 100644
+--- a/drivers/gpu/drm/i915/i915_perf.c
++++ b/drivers/gpu/drm/i915/i915_perf.c
+@@ -2909,49 +2909,6 @@ void i915_oa_init_reg_state(const struct intel_context *ce,
+ 		gen8_update_reg_state_unlocked(ce, stream);
+ }
+ 
+-/**
+- * i915_perf_read_locked - &i915_perf_stream_ops->read with error normalisation
+- * @stream: An i915 perf stream
+- * @file: An i915 perf stream file
+- * @buf: destination buffer given by userspace
+- * @count: the number of bytes userspace wants to read
+- * @ppos: (inout) file seek position (unused)
+- *
+- * Besides wrapping &i915_perf_stream_ops->read this provides a common place to
+- * ensure that if we've successfully copied any data then reporting that takes
+- * precedence over any internal error status, so the data isn't lost.
+- *
+- * For example ret will be -ENOSPC whenever there is more buffered data than
+- * can be copied to userspace, but that's only interesting if we weren't able
+- * to copy some data because it implies the userspace buffer is too small to
+- * receive a single record (and we never split records).
+- *
+- * Another case with ret == -EFAULT is more of a grey area since it would seem
+- * like bad form for userspace to ask us to overrun its buffer, but the user
+- * knows best:
+- *
+- *   http://yarchive.net/comp/linux/partial_reads_writes.html
+- *
+- * Returns: The number of bytes copied or a negative error code on failure.
+- */
+-static ssize_t i915_perf_read_locked(struct i915_perf_stream *stream,
+-				     struct file *file,
+-				     char __user *buf,
+-				     size_t count,
+-				     loff_t *ppos)
+-{
+-	/* Note we keep the offset (aka bytes read) separate from any
+-	 * error status so that the final check for whether we return
+-	 * the bytes read with a higher precedence than any error (see
+-	 * comment below) doesn't need to be handled/duplicated in
+-	 * stream->ops->read() implementations.
+-	 */
+-	size_t offset = 0;
+-	int ret = stream->ops->read(stream, buf, count, &offset);
+-
+-	return offset ?: (ret ?: -EAGAIN);
+-}
+-
+ /**
+  * i915_perf_read - handles read() FOP for i915 perf stream FDs
+  * @file: An i915 perf stream file
+@@ -2977,7 +2934,8 @@ static ssize_t i915_perf_read(struct file *file,
+ {
+ 	struct i915_perf_stream *stream = file->private_data;
+ 	struct i915_perf *perf = stream->perf;
+-	ssize_t ret;
++	size_t offset = 0;
++	int ret;
+ 
+ 	/* To ensure it's handled consistently we simply treat all reads of a
+ 	 * disabled stream as an error. In particular it might otherwise lead
+@@ -3000,13 +2958,12 @@ static ssize_t i915_perf_read(struct file *file,
+ 				return ret;
+ 
+ 			mutex_lock(&perf->lock);
+-			ret = i915_perf_read_locked(stream, file,
+-						    buf, count, ppos);
++			ret = stream->ops->read(stream, buf, count, &offset);
+ 			mutex_unlock(&perf->lock);
+-		} while (ret == -EAGAIN);
++		} while (!offset && !ret);
+ 	} else {
+ 		mutex_lock(&perf->lock);
+-		ret = i915_perf_read_locked(stream, file, buf, count, ppos);
++		ret = stream->ops->read(stream, buf, count, &offset);
+ 		mutex_unlock(&perf->lock);
+ 	}
+ 
+@@ -3017,15 +2974,15 @@ static ssize_t i915_perf_read(struct file *file,
+ 	 * and read() returning -EAGAIN. Clearing the oa.pollin state here
+ 	 * effectively ensures we back off until the next hrtimer callback
+ 	 * before reporting another EPOLLIN event.
++	 * The exception to this is if ops->read() returned -ENOSPC which means
++	 * that more OA data is available than could fit in the user provided
++	 * buffer. In this case we want the next poll() call to not block.
+ 	 */
+-	if (ret >= 0 || ret == -EAGAIN) {
+-		/* Maybe make ->pollin per-stream state if we support multiple
+-		 * concurrent streams in the future.
+-		 */
++	if (ret != -ENOSPC)
+ 		stream->pollin = false;
+-	}
+ 
+-	return ret;
++	/* Possible values for ret are 0, -EFAULT, -ENOSPC, -EIO, ... */
++	return offset ?: (ret ?: -EAGAIN);
+ }
+ 
+ static enum hrtimer_restart oa_poll_check_timer_cb(struct hrtimer *hrtimer)
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/sec2/gp108.c b/drivers/gpu/drm/nouveau/nvkm/engine/sec2/gp108.c
+index 232a9d7c51e5..e770c9497871 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/sec2/gp108.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/sec2/gp108.c
+@@ -25,6 +25,9 @@
+ MODULE_FIRMWARE("nvidia/gp108/sec2/desc.bin");
+ MODULE_FIRMWARE("nvidia/gp108/sec2/image.bin");
+ MODULE_FIRMWARE("nvidia/gp108/sec2/sig.bin");
++MODULE_FIRMWARE("nvidia/gv100/sec2/desc.bin");
++MODULE_FIRMWARE("nvidia/gv100/sec2/image.bin");
++MODULE_FIRMWARE("nvidia/gv100/sec2/sig.bin");
+ 
+ static const struct nvkm_sec2_fwif
+ gp108_sec2_fwif[] = {
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/sec2/tu102.c b/drivers/gpu/drm/nouveau/nvkm/engine/sec2/tu102.c
+index b6ebd95c9ba1..a8295653ceab 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/sec2/tu102.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/sec2/tu102.c
+@@ -56,6 +56,22 @@ tu102_sec2_nofw(struct nvkm_sec2 *sec2, int ver,
+ 	return 0;
+ }
+ 
++MODULE_FIRMWARE("nvidia/tu102/sec2/desc.bin");
++MODULE_FIRMWARE("nvidia/tu102/sec2/image.bin");
++MODULE_FIRMWARE("nvidia/tu102/sec2/sig.bin");
++MODULE_FIRMWARE("nvidia/tu104/sec2/desc.bin");
++MODULE_FIRMWARE("nvidia/tu104/sec2/image.bin");
++MODULE_FIRMWARE("nvidia/tu104/sec2/sig.bin");
++MODULE_FIRMWARE("nvidia/tu106/sec2/desc.bin");
++MODULE_FIRMWARE("nvidia/tu106/sec2/image.bin");
++MODULE_FIRMWARE("nvidia/tu106/sec2/sig.bin");
++MODULE_FIRMWARE("nvidia/tu116/sec2/desc.bin");
++MODULE_FIRMWARE("nvidia/tu116/sec2/image.bin");
++MODULE_FIRMWARE("nvidia/tu116/sec2/sig.bin");
++MODULE_FIRMWARE("nvidia/tu117/sec2/desc.bin");
++MODULE_FIRMWARE("nvidia/tu117/sec2/image.bin");
++MODULE_FIRMWARE("nvidia/tu117/sec2/sig.bin");
++
+ static const struct nvkm_sec2_fwif
+ tu102_sec2_fwif[] = {
+ 	{  0, gp102_sec2_load, &tu102_sec2, &gp102_sec2_acr_1 },
+diff --git a/drivers/hid/hid-lg-g15.c b/drivers/hid/hid-lg-g15.c
+index 8a9268a5c66a..ad4b5412a9f4 100644
+--- a/drivers/hid/hid-lg-g15.c
++++ b/drivers/hid/hid-lg-g15.c
+@@ -803,8 +803,10 @@ static int lg_g15_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	}
+ 
+ 	if (ret < 0) {
+-		hid_err(hdev, "Error disabling keyboard emulation for the G-keys\n");
+-		goto error_hw_stop;
++		hid_err(hdev, "Error %d disabling keyboard emulation for the G-keys, falling back to generic hid-input driver\n",
++			ret);
++		hid_set_drvdata(hdev, NULL);
++		return 0;
+ 	}
+ 
+ 	/* Get initial brightness levels */
+diff --git a/drivers/i2c/busses/i2c-designware-platdrv.c b/drivers/i2c/busses/i2c-designware-platdrv.c
+index 3b7d58c2fe85..15b4b965b443 100644
+--- a/drivers/i2c/busses/i2c-designware-platdrv.c
++++ b/drivers/i2c/busses/i2c-designware-platdrv.c
+@@ -371,10 +371,16 @@ static int dw_i2c_plat_probe(struct platform_device *pdev)
+ 	adap->dev.of_node = pdev->dev.of_node;
+ 	adap->nr = -1;
+ 
+-	dev_pm_set_driver_flags(&pdev->dev,
+-				DPM_FLAG_SMART_PREPARE |
+-				DPM_FLAG_SMART_SUSPEND |
+-				DPM_FLAG_LEAVE_SUSPENDED);
++	if (dev->flags & ACCESS_NO_IRQ_SUSPEND) {
++		dev_pm_set_driver_flags(&pdev->dev,
++					DPM_FLAG_SMART_PREPARE |
++					DPM_FLAG_LEAVE_SUSPENDED);
++	} else {
++		dev_pm_set_driver_flags(&pdev->dev,
++					DPM_FLAG_SMART_PREPARE |
++					DPM_FLAG_SMART_SUSPEND |
++					DPM_FLAG_LEAVE_SUSPENDED);
++	}
+ 
+ 	/* The code below assumes runtime PM to be disabled. */
+ 	WARN_ON(pm_runtime_enabled(&pdev->dev));
+diff --git a/drivers/irqchip/irq-ti-sci-inta.c b/drivers/irqchip/irq-ti-sci-inta.c
+index 8f6e6b08eadf..7e3ebf6ed2cd 100644
+--- a/drivers/irqchip/irq-ti-sci-inta.c
++++ b/drivers/irqchip/irq-ti-sci-inta.c
+@@ -37,6 +37,7 @@
+ #define VINT_ENABLE_SET_OFFSET	0x0
+ #define VINT_ENABLE_CLR_OFFSET	0x8
+ #define VINT_STATUS_OFFSET	0x18
++#define VINT_STATUS_MASKED_OFFSET	0x20
+ 
+ /**
+  * struct ti_sci_inta_event_desc - Description of an event coming to
+@@ -116,7 +117,7 @@ static void ti_sci_inta_irq_handler(struct irq_desc *desc)
+ 	chained_irq_enter(irq_desc_get_chip(desc), desc);
+ 
+ 	val = readq_relaxed(inta->base + vint_desc->vint_id * 0x1000 +
+-			    VINT_STATUS_OFFSET);
++			    VINT_STATUS_MASKED_OFFSET);
+ 
+ 	for_each_set_bit(bit, &val, MAX_EVENTS_PER_VINT) {
+ 		virq = irq_find_mapping(domain, vint_desc->events[bit].hwirq);
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 9b0de2852c69..0123498242b9 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -66,58 +66,6 @@ static const struct mt7530_mib_desc mt7530_mib[] = {
+ 	MIB_DESC(1, 0xb8, "RxArlDrop"),
+ };
+ 
+-static int
+-mt7623_trgmii_write(struct mt7530_priv *priv,  u32 reg, u32 val)
+-{
+-	int ret;
+-
+-	ret =  regmap_write(priv->ethernet, TRGMII_BASE(reg), val);
+-	if (ret < 0)
+-		dev_err(priv->dev,
+-			"failed to priv write register\n");
+-	return ret;
+-}
+-
+-static u32
+-mt7623_trgmii_read(struct mt7530_priv *priv, u32 reg)
+-{
+-	int ret;
+-	u32 val;
+-
+-	ret = regmap_read(priv->ethernet, TRGMII_BASE(reg), &val);
+-	if (ret < 0) {
+-		dev_err(priv->dev,
+-			"failed to priv read register\n");
+-		return ret;
+-	}
+-
+-	return val;
+-}
+-
+-static void
+-mt7623_trgmii_rmw(struct mt7530_priv *priv, u32 reg,
+-		  u32 mask, u32 set)
+-{
+-	u32 val;
+-
+-	val = mt7623_trgmii_read(priv, reg);
+-	val &= ~mask;
+-	val |= set;
+-	mt7623_trgmii_write(priv, reg, val);
+-}
+-
+-static void
+-mt7623_trgmii_set(struct mt7530_priv *priv, u32 reg, u32 val)
+-{
+-	mt7623_trgmii_rmw(priv, reg, 0, val);
+-}
+-
+-static void
+-mt7623_trgmii_clear(struct mt7530_priv *priv, u32 reg, u32 val)
+-{
+-	mt7623_trgmii_rmw(priv, reg, val, 0);
+-}
+-
+ static int
+ core_read_mmd_indirect(struct mt7530_priv *priv, int prtad, int devad)
+ {
+@@ -530,27 +478,6 @@ mt7530_pad_clk_setup(struct dsa_switch *ds, int mode)
+ 		for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
+ 			mt7530_rmw(priv, MT7530_TRGMII_RD(i),
+ 				   RD_TAP_MASK, RD_TAP(16));
+-	else
+-		if (priv->id != ID_MT7621)
+-			mt7623_trgmii_set(priv, GSW_INTF_MODE,
+-					  INTF_MODE_TRGMII);
+-
+-	return 0;
+-}
+-
+-static int
+-mt7623_pad_clk_setup(struct dsa_switch *ds)
+-{
+-	struct mt7530_priv *priv = ds->priv;
+-	int i;
+-
+-	for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
+-		mt7623_trgmii_write(priv, GSW_TRGMII_TD_ODT(i),
+-				    TD_DM_DRVP(8) | TD_DM_DRVN(8));
+-
+-	mt7623_trgmii_set(priv, GSW_TRGMII_RCK_CTRL, RX_RST | RXC_DQSISEL);
+-	mt7623_trgmii_clear(priv, GSW_TRGMII_RCK_CTRL, RX_RST);
+-
+ 	return 0;
+ }
+ 
+@@ -857,8 +784,9 @@ mt7530_port_set_vlan_unaware(struct dsa_switch *ds, int port)
+ 	 */
+ 	mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
+ 		   MT7530_PORT_MATRIX_MODE);
+-	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
+-		   VLAN_ATTR(MT7530_VLAN_TRANSPARENT));
++	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
++		   VLAN_ATTR(MT7530_VLAN_TRANSPARENT) |
++		   PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
+ 
+ 	for (i = 0; i < MT7530_NUM_PORTS; i++) {
+ 		if (dsa_is_user_port(ds, i) &&
+@@ -874,8 +802,8 @@ mt7530_port_set_vlan_unaware(struct dsa_switch *ds, int port)
+ 	if (all_user_ports_removed) {
+ 		mt7530_write(priv, MT7530_PCR_P(MT7530_CPU_PORT),
+ 			     PCR_MATRIX(dsa_user_ports(priv->ds)));
+-		mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT),
+-			     PORT_SPEC_TAG);
++		mt7530_write(priv, MT7530_PVC_P(MT7530_CPU_PORT), PORT_SPEC_TAG
++			     | PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
+ 	}
+ }
+ 
+@@ -901,8 +829,9 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
+ 	/* Set the port as a user port which is to be able to recognize VID
+ 	 * from incoming packets before fetching entry within the VLAN table.
+ 	 */
+-	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK,
+-		   VLAN_ATTR(MT7530_VLAN_USER));
++	mt7530_rmw(priv, MT7530_PVC_P(port), VLAN_ATTR_MASK | PVC_EG_TAG_MASK,
++		   VLAN_ATTR(MT7530_VLAN_USER) |
++		   PVC_EG_TAG(MT7530_VLAN_EG_DISABLED));
+ }
+ 
+ static void
+@@ -1256,10 +1185,6 @@ mt7530_setup(struct dsa_switch *ds)
+ 	dn = dsa_to_port(ds, MT7530_CPU_PORT)->master->dev.of_node->parent;
+ 
+ 	if (priv->id == ID_MT7530) {
+-		priv->ethernet = syscon_node_to_regmap(dn);
+-		if (IS_ERR(priv->ethernet))
+-			return PTR_ERR(priv->ethernet);
+-
+ 		regulator_set_voltage(priv->core_pwr, 1000000, 1000000);
+ 		ret = regulator_enable(priv->core_pwr);
+ 		if (ret < 0) {
+@@ -1333,6 +1258,10 @@ mt7530_setup(struct dsa_switch *ds)
+ 			mt7530_cpu_port_enable(priv, i);
+ 		else
+ 			mt7530_port_disable(ds, i);
++
++		/* Enable consistent egress tag */
++		mt7530_rmw(priv, MT7530_PVC_P(i), PVC_EG_TAG_MASK,
++			   PVC_EG_TAG(MT7530_VLAN_EG_CONSISTENT));
+ 	}
+ 
+ 	/* Setup port 5 */
+@@ -1421,14 +1350,6 @@ static void mt7530_phylink_mac_config(struct dsa_switch *ds, int port,
+ 		/* Setup TX circuit incluing relevant PAD and driving */
+ 		mt7530_pad_clk_setup(ds, state->interface);
+ 
+-		if (priv->id == ID_MT7530) {
+-			/* Setup RX circuit, relevant PAD and driving on the
+-			 * host which must be placed after the setup on the
+-			 * device side is all finished.
+-			 */
+-			mt7623_pad_clk_setup(ds);
+-		}
+-
+ 		priv->p6_interface = state->interface;
+ 		break;
+ 	default:
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index ccb9da8cad0d..756140b7dfd5 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -167,9 +167,16 @@ enum mt7530_port_mode {
+ /* Register for port vlan control */
+ #define MT7530_PVC_P(x)			(0x2010 + ((x) * 0x100))
+ #define  PORT_SPEC_TAG			BIT(5)
++#define  PVC_EG_TAG(x)			(((x) & 0x7) << 8)
++#define  PVC_EG_TAG_MASK		PVC_EG_TAG(7)
+ #define  VLAN_ATTR(x)			(((x) & 0x3) << 6)
+ #define  VLAN_ATTR_MASK			VLAN_ATTR(3)
+ 
++enum mt7530_vlan_port_eg_tag {
++	MT7530_VLAN_EG_DISABLED = 0,
++	MT7530_VLAN_EG_CONSISTENT = 1,
++};
++
+ enum mt7530_vlan_port_attr {
+ 	MT7530_VLAN_USER = 0,
+ 	MT7530_VLAN_TRANSPARENT = 3,
+@@ -268,7 +275,6 @@ enum mt7530_vlan_port_attr {
+ 
+ /* Registers for TRGMII on the both side */
+ #define MT7530_TRGMII_RCK_CTRL		0x7a00
+-#define GSW_TRGMII_RCK_CTRL		0x300
+ #define  RX_RST				BIT(31)
+ #define  RXC_DQSISEL			BIT(30)
+ #define  DQSI1_TAP_MASK			(0x7f << 8)
+@@ -277,31 +283,24 @@ enum mt7530_vlan_port_attr {
+ #define  DQSI0_TAP(x)			((x) & 0x7f)
+ 
+ #define MT7530_TRGMII_RCK_RTT		0x7a04
+-#define GSW_TRGMII_RCK_RTT		0x304
+ #define  DQS1_GATE			BIT(31)
+ #define  DQS0_GATE			BIT(30)
+ 
+ #define MT7530_TRGMII_RD(x)		(0x7a10 + (x) * 8)
+-#define GSW_TRGMII_RD(x)		(0x310 + (x) * 8)
+ #define  BSLIP_EN			BIT(31)
+ #define  EDGE_CHK			BIT(30)
+ #define  RD_TAP_MASK			0x7f
+ #define  RD_TAP(x)			((x) & 0x7f)
+ 
+-#define GSW_TRGMII_TXCTRL		0x340
+ #define MT7530_TRGMII_TXCTRL		0x7a40
+ #define  TRAIN_TXEN			BIT(31)
+ #define  TXC_INV			BIT(30)
+ #define  TX_RST				BIT(28)
+ 
+ #define MT7530_TRGMII_TD_ODT(i)		(0x7a54 + 8 * (i))
+-#define GSW_TRGMII_TD_ODT(i)		(0x354 + 8 * (i))
+ #define  TD_DM_DRVP(x)			((x) & 0xf)
+ #define  TD_DM_DRVN(x)			(((x) & 0xf) << 4)
+ 
+-#define GSW_INTF_MODE			0x390
+-#define  INTF_MODE_TRGMII		BIT(1)
+-
+ #define MT7530_TRGMII_TCK_CTRL		0x7a78
+ #define  TCK_TAP(x)			(((x) & 0xf) << 8)
+ 
+@@ -434,7 +433,6 @@ static const char *p5_intf_modes(unsigned int p5_interface)
+  * @ds:			The pointer to the dsa core structure
+  * @bus:		The bus used for the device and built-in PHY
+  * @rstc:		The pointer to reset control used by MCM
+- * @ethernet:		The regmap used for access TRGMII-based registers
+  * @core_pwr:		The power supplied into the core
+  * @io_pwr:		The power supplied into the I/O
+  * @reset:		The descriptor for GPIO line tied to its reset pin
+@@ -451,7 +449,6 @@ struct mt7530_priv {
+ 	struct dsa_switch	*ds;
+ 	struct mii_bus		*bus;
+ 	struct reset_control	*rstc;
+-	struct regmap		*ethernet;
+ 	struct regulator	*core_pwr;
+ 	struct regulator	*io_pwr;
+ 	struct gpio_desc	*reset;
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 3257962c147e..9e895ab586d5 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -44,11 +44,8 @@ static int felix_fdb_add(struct dsa_switch *ds, int port,
+ 			 const unsigned char *addr, u16 vid)
+ {
+ 	struct ocelot *ocelot = ds->priv;
+-	bool vlan_aware;
+ 
+-	vlan_aware = dsa_port_is_vlan_filtering(dsa_to_port(ds, port));
+-
+-	return ocelot_fdb_add(ocelot, port, addr, vid, vlan_aware);
++	return ocelot_fdb_add(ocelot, port, addr, vid);
+ }
+ 
+ static int felix_fdb_del(struct dsa_switch *ds, int port,
+diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+index b71f9b04a51e..a87264f95f1a 100644
+--- a/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
++++ b/drivers/net/ethernet/amd/xgbe/xgbe-drv.c
+@@ -514,7 +514,7 @@ static void xgbe_isr_task(unsigned long data)
+ 				xgbe_disable_rx_tx_ints(pdata);
+ 
+ 				/* Turn on polling */
+-				__napi_schedule_irqoff(&pdata->napi);
++				__napi_schedule(&pdata->napi);
+ 			}
+ 		} else {
+ 			/* Don't clear Rx/Tx status if doing per channel DMA
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 8c6cfd15481c..b5408c5b954a 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -65,6 +65,17 @@ u32 mtk_r32(struct mtk_eth *eth, unsigned reg)
+ 	return __raw_readl(eth->base + reg);
+ }
+ 
++u32 mtk_m32(struct mtk_eth *eth, u32 mask, u32 set, unsigned reg)
++{
++	u32 val;
++
++	val = mtk_r32(eth, reg);
++	val &= ~mask;
++	val |= set;
++	mtk_w32(eth, val, reg);
++	return reg;
++}
++
+ static int mtk_mdio_busy_wait(struct mtk_eth *eth)
+ {
+ 	unsigned long t_start = jiffies;
+@@ -193,7 +204,7 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
+ 	struct mtk_mac *mac = container_of(config, struct mtk_mac,
+ 					   phylink_config);
+ 	struct mtk_eth *eth = mac->hw;
+-	u32 mcr_cur, mcr_new, sid;
++	u32 mcr_cur, mcr_new, sid, i;
+ 	int val, ge_mode, err;
+ 
+ 	/* MT76x8 has no hardware settings between for the MAC */
+@@ -255,6 +266,17 @@ static void mtk_mac_config(struct phylink_config *config, unsigned int mode,
+ 				    PHY_INTERFACE_MODE_TRGMII)
+ 					mtk_gmac0_rgmii_adjust(mac->hw,
+ 							       state->speed);
++
++				/* mt7623_pad_clk_setup */
++				for (i = 0 ; i < NUM_TRGMII_CTRL; i++)
++					mtk_w32(mac->hw,
++						TD_DM_DRVP(8) | TD_DM_DRVN(8),
++						TRGMII_TD_ODT(i));
++
++				/* Assert/release MT7623 RXC reset */
++				mtk_m32(mac->hw, 0, RXC_RST | RXC_DQSISEL,
++					TRGMII_RCK_CTRL);
++				mtk_m32(mac->hw, RXC_RST, 0, TRGMII_RCK_CTRL);
+ 			}
+ 		}
+ 
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+index 85830fe14a1b..454cfcd465fd 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+@@ -352,10 +352,13 @@
+ #define DQSI0(x)		((x << 0) & GENMASK(6, 0))
+ #define DQSI1(x)		((x << 8) & GENMASK(14, 8))
+ #define RXCTL_DMWTLAT(x)	((x << 16) & GENMASK(18, 16))
++#define RXC_RST			BIT(31)
+ #define RXC_DQSISEL		BIT(30)
+ #define RCK_CTRL_RGMII_1000	(RXC_DQSISEL | RXCTL_DMWTLAT(2) | DQSI1(16))
+ #define RCK_CTRL_RGMII_10_100	RXCTL_DMWTLAT(2)
+ 
++#define NUM_TRGMII_CTRL		5
++
+ /* TRGMII RXC control register */
+ #define TRGMII_TCK_CTRL		0x10340
+ #define TXCTL_DMWTLAT(x)	((x << 16) & GENMASK(18, 16))
+@@ -363,6 +366,11 @@
+ #define TCK_CTRL_RGMII_1000	TXCTL_DMWTLAT(2)
+ #define TCK_CTRL_RGMII_10_100	(TXC_INV | TXCTL_DMWTLAT(2))
+ 
++/* TRGMII TX Drive Strength */
++#define TRGMII_TD_ODT(i)	(0x10354 + 8 * (i))
++#define  TD_DM_DRVP(x)		((x) & 0xf)
++#define  TD_DM_DRVN(x)		(((x) & 0xf) << 4)
++
+ /* TRGMII Interface mode register */
+ #define INTF_MODE		0x10390
+ #define TRGMII_INTF_DIS		BIT(0)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index ac108f1e5bd6..184c3eaefbcb 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -23,7 +23,10 @@ static int mlx5_devlink_flash_update(struct devlink *devlink,
+ 	if (err)
+ 		return err;
+ 
+-	return mlx5_firmware_flash(dev, fw, extack);
++	err = mlx5_firmware_flash(dev, fw, extack);
++	release_firmware(fw);
++
++	return err;
+ }
+ 
+ static u8 mlx5_fw_ver_major(u32 version)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index c9606b8ab6ef..ddd2409fc8be 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1036,14 +1036,15 @@ int mlx5e_open_channels(struct mlx5e_priv *priv,
+ 			struct mlx5e_channels *chs);
+ void mlx5e_close_channels(struct mlx5e_channels *chs);
+ 
+-/* Function pointer to be used to modify WH settings while
++/* Function pointer to be used to modify HW or kernel settings while
+  * switching channels
+  */
+-typedef int (*mlx5e_fp_hw_modify)(struct mlx5e_priv *priv);
++typedef int (*mlx5e_fp_preactivate)(struct mlx5e_priv *priv);
+ int mlx5e_safe_reopen_channels(struct mlx5e_priv *priv);
+ int mlx5e_safe_switch_channels(struct mlx5e_priv *priv,
+ 			       struct mlx5e_channels *new_chs,
+-			       mlx5e_fp_hw_modify hw_modify);
++			       mlx5e_fp_preactivate preactivate);
++int mlx5e_num_channels_changed(struct mlx5e_priv *priv);
+ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv);
+ void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index d674cb679895..d2cfa247abc8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -432,9 +432,7 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
+ 
+ 	if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+ 		*cur_params = new_channels.params;
+-		if (!netif_is_rxfh_configured(priv->netdev))
+-			mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
+-						      MLX5E_INDIR_RQT_SIZE, count);
++		mlx5e_num_channels_changed(priv);
+ 		goto out;
+ 	}
+ 
+@@ -442,12 +440,8 @@ int mlx5e_ethtool_set_channels(struct mlx5e_priv *priv,
+ 	if (arfs_enabled)
+ 		mlx5e_arfs_disable(priv);
+ 
+-	if (!netif_is_rxfh_configured(priv->netdev))
+-		mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
+-					      MLX5E_INDIR_RQT_SIZE, count);
+-
+ 	/* Switch to new channels, set new parameters and close old ones */
+-	err = mlx5e_safe_switch_channels(priv, &new_channels, NULL);
++	err = mlx5e_safe_switch_channels(priv, &new_channels, mlx5e_num_channels_changed);
+ 
+ 	if (arfs_enabled) {
+ 		int err2 = mlx5e_arfs_enable(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 4ef3dc79f73c..265073996432 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2886,6 +2886,28 @@ static void mlx5e_netdev_set_tcs(struct net_device *netdev)
+ 		netdev_set_tc_queue(netdev, tc, nch, 0);
+ }
+ 
++static void mlx5e_update_netdev_queues(struct mlx5e_priv *priv)
++{
++	int num_txqs = priv->channels.num * priv->channels.params.num_tc;
++	int num_rxqs = priv->channels.num * priv->profile->rq_groups;
++	struct net_device *netdev = priv->netdev;
++
++	mlx5e_netdev_set_tcs(netdev);
++	netif_set_real_num_tx_queues(netdev, num_txqs);
++	netif_set_real_num_rx_queues(netdev, num_rxqs);
++}
++
++int mlx5e_num_channels_changed(struct mlx5e_priv *priv)
++{
++	u16 count = priv->channels.params.num_channels;
++
++	if (!netif_is_rxfh_configured(priv->netdev))
++		mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
++					      MLX5E_INDIR_RQT_SIZE, count);
++
++	return 0;
++}
++
+ static void mlx5e_build_txq_maps(struct mlx5e_priv *priv)
+ {
+ 	int i, ch;
+@@ -2907,13 +2929,7 @@ static void mlx5e_build_txq_maps(struct mlx5e_priv *priv)
+ 
+ void mlx5e_activate_priv_channels(struct mlx5e_priv *priv)
+ {
+-	int num_txqs = priv->channels.num * priv->channels.params.num_tc;
+-	int num_rxqs = priv->channels.num * priv->profile->rq_groups;
+-	struct net_device *netdev = priv->netdev;
+-
+-	mlx5e_netdev_set_tcs(netdev);
+-	netif_set_real_num_tx_queues(netdev, num_txqs);
+-	netif_set_real_num_rx_queues(netdev, num_rxqs);
++	mlx5e_update_netdev_queues(priv);
+ 
+ 	mlx5e_build_txq_maps(priv);
+ 	mlx5e_activate_channels(&priv->channels);
+@@ -2949,7 +2965,7 @@ void mlx5e_deactivate_priv_channels(struct mlx5e_priv *priv)
+ 
+ static void mlx5e_switch_priv_channels(struct mlx5e_priv *priv,
+ 				       struct mlx5e_channels *new_chs,
+-				       mlx5e_fp_hw_modify hw_modify)
++				       mlx5e_fp_preactivate preactivate)
+ {
+ 	struct net_device *netdev = priv->netdev;
+ 	int new_num_txqs;
+@@ -2968,9 +2984,11 @@ static void mlx5e_switch_priv_channels(struct mlx5e_priv *priv,
+ 
+ 	priv->channels = *new_chs;
+ 
+-	/* New channels are ready to roll, modify HW settings if needed */
+-	if (hw_modify)
+-		hw_modify(priv);
++	/* New channels are ready to roll, call the preactivate hook if needed
++	 * to modify HW settings or update kernel parameters.
++	 */
++	if (preactivate)
++		preactivate(priv);
+ 
+ 	priv->profile->update_rx(priv);
+ 	mlx5e_activate_priv_channels(priv);
+@@ -2982,7 +3000,7 @@ static void mlx5e_switch_priv_channels(struct mlx5e_priv *priv,
+ 
+ int mlx5e_safe_switch_channels(struct mlx5e_priv *priv,
+ 			       struct mlx5e_channels *new_chs,
+-			       mlx5e_fp_hw_modify hw_modify)
++			       mlx5e_fp_preactivate preactivate)
+ {
+ 	int err;
+ 
+@@ -2990,7 +3008,7 @@ int mlx5e_safe_switch_channels(struct mlx5e_priv *priv,
+ 	if (err)
+ 		return err;
+ 
+-	mlx5e_switch_priv_channels(priv, new_chs, hw_modify);
++	mlx5e_switch_priv_channels(priv, new_chs, preactivate);
+ 	return 0;
+ }
+ 
+@@ -5298,9 +5316,10 @@ int mlx5e_attach_netdev(struct mlx5e_priv *priv)
+ 	max_nch = mlx5e_get_max_num_channels(priv->mdev);
+ 	if (priv->channels.params.num_channels > max_nch) {
+ 		mlx5_core_warn(priv->mdev, "MLX5E: Reducing number of channels to %d\n", max_nch);
++		/* Reducing the number of channels - RXFH has to be reset. */
++		priv->netdev->priv_flags &= ~IFF_RXFH_CONFIGURED;
+ 		priv->channels.params.num_channels = max_nch;
+-		mlx5e_build_default_indir_rqt(priv->rss_params.indirection_rqt,
+-					      MLX5E_INDIR_RQT_SIZE, max_nch);
++		mlx5e_num_channels_changed(priv);
+ 	}
+ 
+ 	err = profile->init_tx(priv);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 6ed307d7f191..ffc193c4ad43 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1969,29 +1969,30 @@ static int register_devlink_port(struct mlx5_core_dev *dev,
+ 	struct mlx5_eswitch_rep *rep = rpriv->rep;
+ 	struct netdev_phys_item_id ppid = {};
+ 	unsigned int dl_port_index = 0;
++	u16 pfnum;
+ 
+ 	if (!is_devlink_port_supported(dev, rpriv))
+ 		return 0;
+ 
+ 	mlx5e_rep_get_port_parent_id(rpriv->netdev, &ppid);
++	pfnum = PCI_FUNC(dev->pdev->devfn);
+ 
+ 	if (rep->vport == MLX5_VPORT_UPLINK) {
+ 		devlink_port_attrs_set(&rpriv->dl_port,
+ 				       DEVLINK_PORT_FLAVOUR_PHYSICAL,
+-				       PCI_FUNC(dev->pdev->devfn), false, 0,
++				       pfnum, false, 0,
+ 				       &ppid.id[0], ppid.id_len);
+ 		dl_port_index = vport_to_devlink_port_index(dev, rep->vport);
+ 	} else if (rep->vport == MLX5_VPORT_PF) {
+ 		devlink_port_attrs_pci_pf_set(&rpriv->dl_port,
+ 					      &ppid.id[0], ppid.id_len,
+-					      dev->pdev->devfn);
++					      pfnum);
+ 		dl_port_index = rep->vport;
+ 	} else if (mlx5_eswitch_is_vf_vport(dev->priv.eswitch,
+ 					    rpriv->rep->vport)) {
+ 		devlink_port_attrs_pci_vf_set(&rpriv->dl_port,
+ 					      &ppid.id[0], ppid.id_len,
+-					      dev->pdev->devfn,
+-					      rep->vport - 1);
++					      pfnum, rep->vport - 1);
+ 		dl_port_index = vport_to_devlink_port_index(dev, rep->vport);
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index ec5fc52bf572..4659c205cc01 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -3269,12 +3269,13 @@ static int add_vlan_pop_action(struct mlx5e_priv *priv,
+ 			       struct mlx5_esw_flow_attr *attr,
+ 			       u32 *action)
+ {
+-	int nest_level = attr->parse_attr->filter_dev->lower_level;
+ 	struct flow_action_entry vlan_act = {
+ 		.id = FLOW_ACTION_VLAN_POP,
+ 	};
+-	int err = 0;
++	int nest_level, err = 0;
+ 
++	nest_level = attr->parse_attr->filter_dev->lower_level -
++						priv->netdev->lower_level;
+ 	while (nest_level--) {
+ 		err = parse_tc_vlan_action(priv, &vlan_act, attr, action);
+ 		if (err)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index d9f4e8c59c1f..68e7ef7ca52d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -243,7 +243,7 @@ recover_from_sw_reset:
+ 		if (mlx5_get_nic_state(dev) == MLX5_NIC_IFC_DISABLED)
+ 			break;
+ 
+-		cond_resched();
++		msleep(20);
+ 	} while (!time_after(jiffies, end));
+ 
+ 	if (mlx5_get_nic_state(dev) != MLX5_NIC_IFC_DISABLED) {
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index d3b7373c5961..b14286dc49fb 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -183,44 +183,47 @@ static void ocelot_vlan_mode(struct ocelot *ocelot, int port,
+ 	ocelot_write(ocelot, val, ANA_VLANMASK);
+ }
+ 
+-void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port,
+-				bool vlan_aware)
++static int ocelot_port_set_native_vlan(struct ocelot *ocelot, int port,
++				       u16 vid)
+ {
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
+-	u32 val;
++	u32 val = 0;
+ 
+-	if (vlan_aware)
+-		val = ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
+-		      ANA_PORT_VLAN_CFG_VLAN_POP_CNT(1);
+-	else
+-		val = 0;
+-	ocelot_rmw_gix(ocelot, val,
+-		       ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
+-		       ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M,
+-		       ANA_PORT_VLAN_CFG, port);
++	if (ocelot_port->vid != vid) {
++		/* Always permit deleting the native VLAN (vid = 0) */
++		if (ocelot_port->vid && vid) {
++			dev_err(ocelot->dev,
++				"Port already has a native VLAN: %d\n",
++				ocelot_port->vid);
++			return -EBUSY;
++		}
++		ocelot_port->vid = vid;
++	}
++
++	ocelot_rmw_gix(ocelot, REW_PORT_VLAN_CFG_PORT_VID(vid),
++		       REW_PORT_VLAN_CFG_PORT_VID_M,
++		       REW_PORT_VLAN_CFG, port);
+ 
+-	if (vlan_aware && !ocelot_port->vid)
++	if (ocelot_port->vlan_aware && !ocelot_port->vid)
+ 		/* If port is vlan-aware and tagged, drop untagged and priority
+ 		 * tagged frames.
+ 		 */
+ 		val = ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA |
+ 		      ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
+ 		      ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA;
+-	else
+-		val = 0;
+ 	ocelot_rmw_gix(ocelot, val,
+ 		       ANA_PORT_DROP_CFG_DROP_UNTAGGED_ENA |
+ 		       ANA_PORT_DROP_CFG_DROP_PRIO_S_TAGGED_ENA |
+ 		       ANA_PORT_DROP_CFG_DROP_PRIO_C_TAGGED_ENA,
+ 		       ANA_PORT_DROP_CFG, port);
+ 
+-	if (vlan_aware) {
++	if (ocelot_port->vlan_aware) {
+ 		if (ocelot_port->vid)
+ 			/* Tag all frames except when VID == DEFAULT_VLAN */
+-			val |= REW_TAG_CFG_TAG_CFG(1);
++			val = REW_TAG_CFG_TAG_CFG(1);
+ 		else
+ 			/* Tag all frames */
+-			val |= REW_TAG_CFG_TAG_CFG(3);
++			val = REW_TAG_CFG_TAG_CFG(3);
+ 	} else {
+ 		/* Port tagging disabled. */
+ 		val = REW_TAG_CFG_TAG_CFG(0);
+@@ -228,31 +231,31 @@ void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port,
+ 	ocelot_rmw_gix(ocelot, val,
+ 		       REW_TAG_CFG_TAG_CFG_M,
+ 		       REW_TAG_CFG, port);
++
++	return 0;
+ }
+-EXPORT_SYMBOL(ocelot_port_vlan_filtering);
+ 
+-static int ocelot_port_set_native_vlan(struct ocelot *ocelot, int port,
+-				       u16 vid)
++void ocelot_port_vlan_filtering(struct ocelot *ocelot, int port,
++				bool vlan_aware)
+ {
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
++	u32 val;
+ 
+-	if (ocelot_port->vid != vid) {
+-		/* Always permit deleting the native VLAN (vid = 0) */
+-		if (ocelot_port->vid && vid) {
+-			dev_err(ocelot->dev,
+-				"Port already has a native VLAN: %d\n",
+-				ocelot_port->vid);
+-			return -EBUSY;
+-		}
+-		ocelot_port->vid = vid;
+-	}
++	ocelot_port->vlan_aware = vlan_aware;
+ 
+-	ocelot_rmw_gix(ocelot, REW_PORT_VLAN_CFG_PORT_VID(vid),
+-		       REW_PORT_VLAN_CFG_PORT_VID_M,
+-		       REW_PORT_VLAN_CFG, port);
++	if (vlan_aware)
++		val = ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
++		      ANA_PORT_VLAN_CFG_VLAN_POP_CNT(1);
++	else
++		val = 0;
++	ocelot_rmw_gix(ocelot, val,
++		       ANA_PORT_VLAN_CFG_VLAN_AWARE_ENA |
++		       ANA_PORT_VLAN_CFG_VLAN_POP_CNT_M,
++		       ANA_PORT_VLAN_CFG, port);
+ 
+-	return 0;
++	ocelot_port_set_native_vlan(ocelot, port, ocelot_port->vid);
+ }
++EXPORT_SYMBOL(ocelot_port_vlan_filtering);
+ 
+ /* Default vlan to clasify for untagged frames (may be zero) */
+ static void ocelot_port_set_pvid(struct ocelot *ocelot, int port, u16 pvid)
+@@ -858,12 +861,12 @@ static void ocelot_get_stats64(struct net_device *dev,
+ }
+ 
+ int ocelot_fdb_add(struct ocelot *ocelot, int port,
+-		   const unsigned char *addr, u16 vid, bool vlan_aware)
++		   const unsigned char *addr, u16 vid)
+ {
+ 	struct ocelot_port *ocelot_port = ocelot->ports[port];
+ 
+ 	if (!vid) {
+-		if (!vlan_aware)
++		if (!ocelot_port->vlan_aware)
+ 			/* If the bridge is not VLAN aware and no VID was
+ 			 * provided, set it to pvid to ensure the MAC entry
+ 			 * matches incoming untagged packets
+@@ -890,7 +893,7 @@ static int ocelot_port_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
+ 	struct ocelot *ocelot = priv->port.ocelot;
+ 	int port = priv->chip_port;
+ 
+-	return ocelot_fdb_add(ocelot, port, addr, vid, priv->vlan_aware);
++	return ocelot_fdb_add(ocelot, port, addr, vid);
+ }
+ 
+ int ocelot_fdb_del(struct ocelot *ocelot, int port,
+@@ -1489,8 +1492,8 @@ static int ocelot_port_attr_set(struct net_device *dev,
+ 		ocelot_port_attr_ageing_set(ocelot, port, attr->u.ageing_time);
+ 		break;
+ 	case SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING:
+-		priv->vlan_aware = attr->u.vlan_filtering;
+-		ocelot_port_vlan_filtering(ocelot, port, priv->vlan_aware);
++		ocelot_port_vlan_filtering(ocelot, port,
++					   attr->u.vlan_filtering);
+ 		break;
+ 	case SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED:
+ 		ocelot_port_attr_mc_set(ocelot, port, !attr->u.mc_disabled);
+@@ -1861,7 +1864,6 @@ static int ocelot_netdevice_port_event(struct net_device *dev,
+ 			} else {
+ 				err = ocelot_port_bridge_leave(ocelot, port,
+ 							       info->upper_dev);
+-				priv->vlan_aware = false;
+ 			}
+ 		}
+ 		if (netif_is_lag_master(info->upper_dev)) {
+diff --git a/drivers/net/ethernet/mscc/ocelot.h b/drivers/net/ethernet/mscc/ocelot.h
+index 04372ba72fec..8e67fc40db0d 100644
+--- a/drivers/net/ethernet/mscc/ocelot.h
++++ b/drivers/net/ethernet/mscc/ocelot.h
+@@ -66,8 +66,6 @@ struct ocelot_port_private {
+ 	struct phy_device *phy;
+ 	u8 chip_port;
+ 
+-	u8 vlan_aware;
+-
+ 	struct phy *serdes;
+ 
+ 	struct ocelot_port_tc tc;
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
+index 7d40760e9ba8..0e1ca2cba3c7 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-sunxi.c
+@@ -150,6 +150,8 @@ static int sun7i_gmac_probe(struct platform_device *pdev)
+ 	plat_dat->init = sun7i_gmac_init;
+ 	plat_dat->exit = sun7i_gmac_exit;
+ 	plat_dat->fix_mac_speed = sun7i_fix_speed;
++	plat_dat->tx_fifo_size = 4096;
++	plat_dat->rx_fifo_size = 16384;
+ 
+ 	ret = sun7i_gmac_init(pdev, plat_dat->bsp_priv);
+ 	if (ret)
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+index 67b754a56288..a7d7a05d2aff 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwxgmac2_core.c
+@@ -576,8 +576,13 @@ static void dwxgmac2_update_vlan_hash(struct mac_device_info *hw, u32 hash,
+ 			value |= XGMAC_VLAN_EDVLP;
+ 			value |= XGMAC_VLAN_ESVL;
+ 			value |= XGMAC_VLAN_DOVLTC;
++		} else {
++			value &= ~XGMAC_VLAN_EDVLP;
++			value &= ~XGMAC_VLAN_ESVL;
++			value &= ~XGMAC_VLAN_DOVLTC;
+ 		}
+ 
++		value &= ~XGMAC_VLAN_VID;
+ 		writel(value, ioaddr + XGMAC_VLAN_TAG);
+ 	} else if (perfect_match) {
+ 		u32 value = readl(ioaddr + XGMAC_PACKET_FILTER);
+@@ -588,13 +593,19 @@ static void dwxgmac2_update_vlan_hash(struct mac_device_info *hw, u32 hash,
+ 
+ 		value = readl(ioaddr + XGMAC_VLAN_TAG);
+ 
++		value &= ~XGMAC_VLAN_VTHM;
+ 		value |= XGMAC_VLAN_ETV;
+ 		if (is_double) {
+ 			value |= XGMAC_VLAN_EDVLP;
+ 			value |= XGMAC_VLAN_ESVL;
+ 			value |= XGMAC_VLAN_DOVLTC;
++		} else {
++			value &= ~XGMAC_VLAN_EDVLP;
++			value &= ~XGMAC_VLAN_ESVL;
++			value &= ~XGMAC_VLAN_DOVLTC;
+ 		}
+ 
++		value &= ~XGMAC_VLAN_VID;
+ 		writel(value | perfect_match, ioaddr + XGMAC_VLAN_TAG);
+ 	} else {
+ 		u32 value = readl(ioaddr + XGMAC_PACKET_FILTER);
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 92bc2b2df660..061aada4748a 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3463,7 +3463,7 @@ static int macsec_changelink(struct net_device *dev, struct nlattr *tb[],
+ 			     struct netlink_ext_ack *extack)
+ {
+ 	struct macsec_dev *macsec = macsec_priv(dev);
+-	struct macsec_tx_sa tx_sc;
++	struct macsec_tx_sc tx_sc;
+ 	struct macsec_secy secy;
+ 	int ret;
+ 
+diff --git a/drivers/net/phy/marvell.c b/drivers/net/phy/marvell.c
+index 9a8badafea8a..561df5e33f65 100644
+--- a/drivers/net/phy/marvell.c
++++ b/drivers/net/phy/marvell.c
+@@ -1278,6 +1278,30 @@ static int marvell_read_status_page_an(struct phy_device *phydev,
+ 	int lpa;
+ 	int err;
+ 
++	if (!(status & MII_M1011_PHY_STATUS_RESOLVED)) {
++		phydev->link = 0;
++		return 0;
++	}
++
++	if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
++		phydev->duplex = DUPLEX_FULL;
++	else
++		phydev->duplex = DUPLEX_HALF;
++
++	switch (status & MII_M1011_PHY_STATUS_SPD_MASK) {
++	case MII_M1011_PHY_STATUS_1000:
++		phydev->speed = SPEED_1000;
++		break;
++
++	case MII_M1011_PHY_STATUS_100:
++		phydev->speed = SPEED_100;
++		break;
++
++	default:
++		phydev->speed = SPEED_10;
++		break;
++	}
++
+ 	if (!fiber) {
+ 		err = genphy_read_lpa(phydev);
+ 		if (err < 0)
+@@ -1306,28 +1330,6 @@ static int marvell_read_status_page_an(struct phy_device *phydev,
+ 		}
+ 	}
+ 
+-	if (!(status & MII_M1011_PHY_STATUS_RESOLVED))
+-		return 0;
+-
+-	if (status & MII_M1011_PHY_STATUS_FULLDUPLEX)
+-		phydev->duplex = DUPLEX_FULL;
+-	else
+-		phydev->duplex = DUPLEX_HALF;
+-
+-	switch (status & MII_M1011_PHY_STATUS_SPD_MASK) {
+-	case MII_M1011_PHY_STATUS_1000:
+-		phydev->speed = SPEED_1000;
+-		break;
+-
+-	case MII_M1011_PHY_STATUS_100:
+-		phydev->speed = SPEED_100;
+-		break;
+-
+-	default:
+-		phydev->speed = SPEED_10;
+-		break;
+-	}
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
+index 51b64f087717..663c68ed6ef9 100644
+--- a/drivers/net/phy/micrel.c
++++ b/drivers/net/phy/micrel.c
+@@ -1154,7 +1154,7 @@ static struct phy_driver ksphy_driver[] = {
+ 	.driver_data	= &ksz9021_type,
+ 	.probe		= kszphy_probe,
+ 	.config_init	= ksz9131_config_init,
+-	.read_status	= ksz9031_read_status,
++	.read_status	= genphy_read_status,
+ 	.ack_interrupt	= kszphy_ack_interrupt,
+ 	.config_intr	= kszphy_config_intr,
+ 	.get_sset_count = kszphy_get_sset_count,
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 9de9b7d8aedd..3063f2c9fa63 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1925,6 +1925,7 @@ drop:
+ 
+ 	skb_reset_network_header(skb);
+ 	skb_probe_transport_header(skb);
++	skb_record_rx_queue(skb, tfile->queue_index);
+ 
+ 	if (skb_xdp) {
+ 		struct bpf_prog *xdp_prog;
+@@ -2498,6 +2499,7 @@ build:
+ 	skb->protocol = eth_type_trans(skb, tun->dev);
+ 	skb_reset_network_header(skb);
+ 	skb_probe_transport_header(skb);
++	skb_record_rx_queue(skb, tfile->queue_index);
+ 
+ 	if (skb_xdp) {
+ 		err = do_xdp_generic(xdp_prog, skb);
+@@ -2509,7 +2511,6 @@ build:
+ 	    !tfile->detached)
+ 		rxhash = __skb_get_hash_symmetric(skb);
+ 
+-	skb_record_rx_queue(skb, tfile->queue_index);
+ 	netif_receive_skb(skb);
+ 
+ 	/* No need for get_cpu_ptr() here since this function is
+diff --git a/drivers/net/wireless/mac80211_hwsim.c b/drivers/net/wireless/mac80211_hwsim.c
+index 03738107fd10..151752c00727 100644
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -3600,9 +3600,9 @@ static int hwsim_new_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 	}
+ 
+ 	if (info->attrs[HWSIM_ATTR_RADIO_NAME]) {
+-		hwname = kasprintf(GFP_KERNEL, "%.*s",
+-				   nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
+-				   (char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]));
++		hwname = kstrndup((char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]),
++				  nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
++				  GFP_KERNEL);
+ 		if (!hwname)
+ 			return -ENOMEM;
+ 		param.hwname = hwname;
+@@ -3622,9 +3622,9 @@ static int hwsim_del_radio_nl(struct sk_buff *msg, struct genl_info *info)
+ 	if (info->attrs[HWSIM_ATTR_RADIO_ID]) {
+ 		idx = nla_get_u32(info->attrs[HWSIM_ATTR_RADIO_ID]);
+ 	} else if (info->attrs[HWSIM_ATTR_RADIO_NAME]) {
+-		hwname = kasprintf(GFP_KERNEL, "%.*s",
+-				   nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
+-				   (char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]));
++		hwname = kstrndup((char *)nla_data(info->attrs[HWSIM_ATTR_RADIO_NAME]),
++				  nla_len(info->attrs[HWSIM_ATTR_RADIO_NAME]),
++				  GFP_KERNEL);
+ 		if (!hwname)
+ 			return -ENOMEM;
+ 	} else
+diff --git a/drivers/platform/chrome/cros_ec_rpmsg.c b/drivers/platform/chrome/cros_ec_rpmsg.c
+index dbc3f5523b83..7e8629e3db74 100644
+--- a/drivers/platform/chrome/cros_ec_rpmsg.c
++++ b/drivers/platform/chrome/cros_ec_rpmsg.c
+@@ -44,6 +44,8 @@ struct cros_ec_rpmsg {
+ 	struct completion xfer_ack;
+ 	struct work_struct host_event_work;
+ 	struct rpmsg_endpoint *ept;
++	bool has_pending_host_event;
++	bool probe_done;
+ };
+ 
+ /**
+@@ -177,7 +179,14 @@ static int cros_ec_rpmsg_callback(struct rpmsg_device *rpdev, void *data,
+ 		memcpy(ec_dev->din, resp->data, len);
+ 		complete(&ec_rpmsg->xfer_ack);
+ 	} else if (resp->type == HOST_EVENT_MARK) {
+-		schedule_work(&ec_rpmsg->host_event_work);
++		/*
++		 * If the host event is sent before cros_ec_register is
++		 * finished, queue the host event.
++		 */
++		if (ec_rpmsg->probe_done)
++			schedule_work(&ec_rpmsg->host_event_work);
++		else
++			ec_rpmsg->has_pending_host_event = true;
+ 	} else {
+ 		dev_warn(ec_dev->dev, "rpmsg received invalid type = %d",
+ 			 resp->type);
+@@ -240,6 +249,11 @@ static int cros_ec_rpmsg_probe(struct rpmsg_device *rpdev)
+ 		return ret;
+ 	}
+ 
++	ec_rpmsg->probe_done = true;
++
++	if (ec_rpmsg->has_pending_host_event)
++		schedule_work(&ec_rpmsg->host_event_work);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/pwm/pwm-pca9685.c b/drivers/pwm/pwm-pca9685.c
+index b07bdca3d510..590375be5214 100644
+--- a/drivers/pwm/pwm-pca9685.c
++++ b/drivers/pwm/pwm-pca9685.c
+@@ -20,6 +20,7 @@
+ #include <linux/slab.h>
+ #include <linux/delay.h>
+ #include <linux/pm_runtime.h>
++#include <linux/bitmap.h>
+ 
+ /*
+  * Because the PCA9685 has only one prescaler per chip, changing the period of
+@@ -74,6 +75,7 @@ struct pca9685 {
+ #if IS_ENABLED(CONFIG_GPIOLIB)
+ 	struct mutex lock;
+ 	struct gpio_chip gpio;
++	DECLARE_BITMAP(pwms_inuse, PCA9685_MAXCHAN + 1);
+ #endif
+ };
+ 
+@@ -83,51 +85,51 @@ static inline struct pca9685 *to_pca(struct pwm_chip *chip)
+ }
+ 
+ #if IS_ENABLED(CONFIG_GPIOLIB)
+-static int pca9685_pwm_gpio_request(struct gpio_chip *gpio, unsigned int offset)
++static bool pca9685_pwm_test_and_set_inuse(struct pca9685 *pca, int pwm_idx)
+ {
+-	struct pca9685 *pca = gpiochip_get_data(gpio);
+-	struct pwm_device *pwm;
++	bool is_inuse;
+ 
+ 	mutex_lock(&pca->lock);
+-
+-	pwm = &pca->chip.pwms[offset];
+-
+-	if (pwm->flags & (PWMF_REQUESTED | PWMF_EXPORTED)) {
+-		mutex_unlock(&pca->lock);
+-		return -EBUSY;
++	if (pwm_idx >= PCA9685_MAXCHAN) {
++		/*
++		 * "all LEDs" channel:
++		 * pretend already in use if any of the PWMs are requested
++		 */
++		if (!bitmap_empty(pca->pwms_inuse, PCA9685_MAXCHAN)) {
++			is_inuse = true;
++			goto out;
++		}
++	} else {
++		/*
++		 * regular channel:
++		 * pretend already in use if the "all LEDs" channel is requested
++		 */
++		if (test_bit(PCA9685_MAXCHAN, pca->pwms_inuse)) {
++			is_inuse = true;
++			goto out;
++		}
+ 	}
+-
+-	pwm_set_chip_data(pwm, (void *)1);
+-
++	is_inuse = test_and_set_bit(pwm_idx, pca->pwms_inuse);
++out:
+ 	mutex_unlock(&pca->lock);
+-	pm_runtime_get_sync(pca->chip.dev);
+-	return 0;
++	return is_inuse;
+ }
+ 
+-static bool pca9685_pwm_is_gpio(struct pca9685 *pca, struct pwm_device *pwm)
++static void pca9685_pwm_clear_inuse(struct pca9685 *pca, int pwm_idx)
+ {
+-	bool is_gpio = false;
+-
+ 	mutex_lock(&pca->lock);
++	clear_bit(pwm_idx, pca->pwms_inuse);
++	mutex_unlock(&pca->lock);
++}
+ 
+-	if (pwm->hwpwm >= PCA9685_MAXCHAN) {
+-		unsigned int i;
+-
+-		/*
+-		 * Check if any of the GPIOs are requested and in that case
+-		 * prevent using the "all LEDs" channel.
+-		 */
+-		for (i = 0; i < pca->gpio.ngpio; i++)
+-			if (gpiochip_is_requested(&pca->gpio, i)) {
+-				is_gpio = true;
+-				break;
+-			}
+-	} else if (pwm_get_chip_data(pwm)) {
+-		is_gpio = true;
+-	}
++static int pca9685_pwm_gpio_request(struct gpio_chip *gpio, unsigned int offset)
++{
++	struct pca9685 *pca = gpiochip_get_data(gpio);
+ 
+-	mutex_unlock(&pca->lock);
+-	return is_gpio;
++	if (pca9685_pwm_test_and_set_inuse(pca, offset))
++		return -EBUSY;
++	pm_runtime_get_sync(pca->chip.dev);
++	return 0;
+ }
+ 
+ static int pca9685_pwm_gpio_get(struct gpio_chip *gpio, unsigned int offset)
+@@ -162,6 +164,7 @@ static void pca9685_pwm_gpio_free(struct gpio_chip *gpio, unsigned int offset)
+ 
+ 	pca9685_pwm_gpio_set(gpio, offset, 0);
+ 	pm_runtime_put(pca->chip.dev);
++	pca9685_pwm_clear_inuse(pca, offset);
+ }
+ 
+ static int pca9685_pwm_gpio_get_direction(struct gpio_chip *chip,
+@@ -213,12 +216,17 @@ static int pca9685_pwm_gpio_probe(struct pca9685 *pca)
+ 	return devm_gpiochip_add_data(dev, &pca->gpio, pca);
+ }
+ #else
+-static inline bool pca9685_pwm_is_gpio(struct pca9685 *pca,
+-				       struct pwm_device *pwm)
++static inline bool pca9685_pwm_test_and_set_inuse(struct pca9685 *pca,
++						  int pwm_idx)
+ {
+ 	return false;
+ }
+ 
++static inline void
++pca9685_pwm_clear_inuse(struct pca9685 *pca, int pwm_idx)
++{
++}
++
+ static inline int pca9685_pwm_gpio_probe(struct pca9685 *pca)
+ {
+ 	return 0;
+@@ -402,7 +410,7 @@ static int pca9685_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
+ {
+ 	struct pca9685 *pca = to_pca(chip);
+ 
+-	if (pca9685_pwm_is_gpio(pca, pwm))
++	if (pca9685_pwm_test_and_set_inuse(pca, pwm->hwpwm))
+ 		return -EBUSY;
+ 	pm_runtime_get_sync(chip->dev);
+ 
+@@ -411,8 +419,11 @@ static int pca9685_pwm_request(struct pwm_chip *chip, struct pwm_device *pwm)
+ 
+ static void pca9685_pwm_free(struct pwm_chip *chip, struct pwm_device *pwm)
+ {
++	struct pca9685 *pca = to_pca(chip);
++
+ 	pca9685_pwm_disable(chip, pwm);
+ 	pm_runtime_put(chip->dev);
++	pca9685_pwm_clear_inuse(pca, pwm->hwpwm);
+ }
+ 
+ static const struct pwm_ops pca9685_pwm_ops = {
+diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
+index 06758a5d9eb1..52c379873c56 100644
+--- a/drivers/scsi/ufs/ufshcd.c
++++ b/drivers/scsi/ufs/ufshcd.c
+@@ -1518,6 +1518,11 @@ start:
+ 		 */
+ 		if (ufshcd_can_hibern8_during_gating(hba) &&
+ 		    ufshcd_is_link_hibern8(hba)) {
++			if (async) {
++				rc = -EAGAIN;
++				hba->clk_gating.active_reqs--;
++				break;
++			}
+ 			spin_unlock_irqrestore(hba->host->host_lock, flags);
+ 			flush_work(&hba->clk_gating.ungate_work);
+ 			spin_lock_irqsave(hba->host->host_lock, flags);
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 09e55ea0bf5d..9fc7e374a29b 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4368,8 +4368,7 @@ int iscsit_close_session(struct iscsi_session *sess)
+ 	 * restart the timer and exit.
+ 	 */
+ 	if (!in_interrupt()) {
+-		if (iscsit_check_session_usage_count(sess) == 1)
+-			iscsit_stop_session(sess, 1, 1);
++		iscsit_check_session_usage_count(sess);
+ 	} else {
+ 		if (iscsit_check_session_usage_count(sess) == 2) {
+ 			atomic_set(&sess->session_logout, 0);
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index a83aeccafae3..4d3c79d90a6e 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2570,10 +2570,8 @@ static void dwc3_gadget_endpoint_transfer_in_progress(struct dwc3_ep *dep,
+ 
+ 	dwc3_gadget_ep_cleanup_completed_requests(dep, event, status);
+ 
+-	if (stop) {
++	if (stop)
+ 		dwc3_stop_active_transfer(dep, true, true);
+-		dep->flags = DWC3_EP_ENABLED;
+-	}
+ 
+ 	/*
+ 	 * WORKAROUND: This is the 2nd half of U1/U2 -> U0 workaround.
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 4bb0f9e4f3f4..696e769d069a 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -561,8 +561,8 @@ static int should_ignore_root(struct btrfs_root *root)
+ 	if (!reloc_root)
+ 		return 0;
+ 
+-	if (btrfs_root_last_snapshot(&reloc_root->root_item) ==
+-	    root->fs_info->running_transaction->transid - 1)
++	if (btrfs_header_generation(reloc_root->commit_root) ==
++	    root->fs_info->running_transaction->transid)
+ 		return 0;
+ 	/*
+ 	 * if there is reloc tree and it was created in previous
+diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
+index 954013d6076b..c5e190fd4589 100644
+--- a/fs/ext4/extents.c
++++ b/fs/ext4/extents.c
+@@ -3532,8 +3532,8 @@ static int ext4_ext_convert_to_initialized(handle_t *handle,
+ 		(unsigned long long)map->m_lblk, map_len);
+ 
+ 	sbi = EXT4_SB(inode->i_sb);
+-	eof_block = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
+-		inode->i_sb->s_blocksize_bits;
++	eof_block = (EXT4_I(inode)->i_disksize + inode->i_sb->s_blocksize - 1)
++			>> inode->i_sb->s_blocksize_bits;
+ 	if (eof_block < map->m_lblk + map_len)
+ 		eof_block = map->m_lblk + map_len;
+ 
+@@ -3785,8 +3785,8 @@ static int ext4_split_convert_extents(handle_t *handle,
+ 		  __func__, inode->i_ino,
+ 		  (unsigned long long)map->m_lblk, map->m_len);
+ 
+-	eof_block = (inode->i_size + inode->i_sb->s_blocksize - 1) >>
+-		inode->i_sb->s_blocksize_bits;
++	eof_block = (EXT4_I(inode)->i_disksize + inode->i_sb->s_blocksize - 1)
++			>> inode->i_sb->s_blocksize_bits;
+ 	if (eof_block < map->m_lblk + map->m_len)
+ 		eof_block = map->m_lblk + map->m_len;
+ 	/*
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 0c7c4adb664e..4f0444f3cda3 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -4157,7 +4157,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	if (sbi->s_inodes_per_group < sbi->s_inodes_per_block ||
+ 	    sbi->s_inodes_per_group > blocksize * 8) {
+ 		ext4_msg(sb, KERN_ERR, "invalid inodes per group: %lu\n",
+-			 sbi->s_blocks_per_group);
++			 sbi->s_inodes_per_group);
+ 		goto failed_mount;
+ 	}
+ 	sbi->s_itb_per_group = sbi->s_inodes_per_group /
+@@ -4286,9 +4286,9 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 			EXT4_BLOCKS_PER_GROUP(sb) - 1);
+ 	do_div(blocks_count, EXT4_BLOCKS_PER_GROUP(sb));
+ 	if (blocks_count > ((uint64_t)1<<32) - EXT4_DESC_PER_BLOCK(sb)) {
+-		ext4_msg(sb, KERN_WARNING, "groups count too large: %u "
++		ext4_msg(sb, KERN_WARNING, "groups count too large: %llu "
+ 		       "(block count %llu, first data block %u, "
+-		       "blocks per group %lu)", sbi->s_groups_count,
++		       "blocks per group %lu)", blocks_count,
+ 		       ext4_blocks_count(es),
+ 		       le32_to_cpu(es->s_first_data_block),
+ 		       EXT4_BLOCKS_PER_GROUP(sb));
+diff --git a/fs/jbd2/commit.c b/fs/jbd2/commit.c
+index 27373f5792a4..e855d8260433 100644
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -997,9 +997,10 @@ restart_loop:
+ 			 * journalled data) we need to unmap buffer and clear
+ 			 * more bits. We also need to be careful about the check
+ 			 * because the data page mapping can get cleared under
+-			 * out hands, which alse need not to clear more bits
+-			 * because the page and buffers will be freed and can
+-			 * never be reused once we are done with them.
++			 * our hands. Note that if mapping == NULL, we don't
++			 * need to make buffer unmapped because the page is
++			 * already detached from the mapping and buffers cannot
++			 * get reused.
+ 			 */
+ 			mapping = READ_ONCE(bh->b_page->mapping);
+ 			if (mapping && !sb_is_blkdev_sb(mapping->host->i_sb)) {
+diff --git a/fs/overlayfs/inode.c b/fs/overlayfs/inode.c
+index 79e8994e3bc1..3f993c114829 100644
+--- a/fs/overlayfs/inode.c
++++ b/fs/overlayfs/inode.c
+@@ -891,7 +891,7 @@ struct inode *ovl_get_inode(struct super_block *sb,
+ 	struct dentry *lowerdentry = lowerpath ? lowerpath->dentry : NULL;
+ 	bool bylower = ovl_hash_bylower(sb, upperdentry, lowerdentry,
+ 					oip->index);
+-	int fsid = bylower ? oip->lowerpath->layer->fsid : 0;
++	int fsid = bylower ? lowerpath->layer->fsid : 0;
+ 	bool is_dir, metacopy = false;
+ 	unsigned long ino = 0;
+ 	int err = oip->newinode ? -EEXIST : -ENOMEM;
+@@ -941,6 +941,8 @@ struct inode *ovl_get_inode(struct super_block *sb,
+ 			err = -ENOMEM;
+ 			goto out_err;
+ 		}
++		ino = realinode->i_ino;
++		fsid = lowerpath->layer->fsid;
+ 	}
+ 	ovl_fill_inode(inode, realinode->i_mode, realinode->i_rdev, ino, fsid);
+ 	ovl_inode_init(inode, upperdentry, lowerdentry, oip->lowerdata);
+diff --git a/fs/proc/base.c b/fs/proc/base.c
+index c7c64272b0fa..a8a38790e1bd 100644
+--- a/fs/proc/base.c
++++ b/fs/proc/base.c
+@@ -1573,6 +1573,7 @@ static ssize_t timens_offsets_write(struct file *file, const char __user *buf,
+ 	noffsets = 0;
+ 	for (pos = kbuf; pos; pos = next_line) {
+ 		struct proc_timens_offset *off = &offsets[noffsets];
++		char clock[10];
+ 		int err;
+ 
+ 		/* Find the end of line and ensure we don't look past it */
+@@ -1584,10 +1585,21 @@ static ssize_t timens_offsets_write(struct file *file, const char __user *buf,
+ 				next_line = NULL;
+ 		}
+ 
+-		err = sscanf(pos, "%u %lld %lu", &off->clockid,
++		err = sscanf(pos, "%9s %lld %lu", clock,
+ 				&off->val.tv_sec, &off->val.tv_nsec);
+ 		if (err != 3 || off->val.tv_nsec >= NSEC_PER_SEC)
+ 			goto out;
++
++		clock[sizeof(clock) - 1] = 0;
++		if (strcmp(clock, "monotonic") == 0 ||
++		    strcmp(clock, __stringify(CLOCK_MONOTONIC)) == 0)
++			off->clockid = CLOCK_MONOTONIC;
++		else if (strcmp(clock, "boottime") == 0 ||
++			 strcmp(clock, __stringify(CLOCK_BOOTTIME)) == 0)
++			off->clockid = CLOCK_BOOTTIME;
++		else
++			goto out;
++
+ 		noffsets++;
+ 		if (noffsets == ARRAY_SIZE(offsets)) {
+ 			if (next_line)
+diff --git a/include/net/ip6_route.h b/include/net/ip6_route.h
+index b69c16cbbf71..2d0d91070268 100644
+--- a/include/net/ip6_route.h
++++ b/include/net/ip6_route.h
+@@ -254,6 +254,7 @@ static inline bool ipv6_anycast_destination(const struct dst_entry *dst,
+ 
+ 	return rt->rt6i_flags & RTF_ANYCAST ||
+ 		(rt->rt6i_dst.plen < 127 &&
++		 !(rt->rt6i_flags & (RTF_GATEWAY | RTF_NONEXTHOP)) &&
+ 		 ipv6_addr_equal(&rt->rt6i_dst.addr, daddr));
+ }
+ 
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index 068f96b1a83e..f8e1955c86f1 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -411,6 +411,8 @@ struct ocelot_port {
+ 
+ 	void __iomem			*regs;
+ 
++	bool				vlan_aware;
++
+ 	/* Ingress default VLAN (pvid) */
+ 	u16				pvid;
+ 
+@@ -529,7 +531,7 @@ int ocelot_port_bridge_leave(struct ocelot *ocelot, int port,
+ int ocelot_fdb_dump(struct ocelot *ocelot, int port,
+ 		    dsa_fdb_dump_cb_t *cb, void *data);
+ int ocelot_fdb_add(struct ocelot *ocelot, int port,
+-		   const unsigned char *addr, u16 vid, bool vlan_aware);
++		   const unsigned char *addr, u16 vid);
+ int ocelot_fdb_del(struct ocelot *ocelot, int port,
+ 		   const unsigned char *addr, u16 vid);
+ int ocelot_vlan_add(struct ocelot *ocelot, int port, u16 vid, bool pvid,
+diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
+index c0a9865b1f6a..fbb484a2e3e8 100644
+--- a/kernel/rcu/tree.c
++++ b/kernel/rcu/tree.c
+@@ -816,7 +816,7 @@ static __always_inline void rcu_nmi_enter_common(bool irq)
+ 			rcu_cleanup_after_idle();
+ 
+ 		incby = 1;
+-	} else if (tick_nohz_full_cpu(rdp->cpu) &&
++	} else if (irq && tick_nohz_full_cpu(rdp->cpu) &&
+ 		   rdp->dynticks_nmi_nesting == DYNTICK_IRQ_NONIDLE &&
+ 		   READ_ONCE(rdp->rcu_urgent_qs) && !rdp->rcu_forced_tick) {
+ 		raw_spin_lock_rcu_node(rdp->mynode);
+diff --git a/kernel/time/namespace.c b/kernel/time/namespace.c
+index 6477c6d0e1a6..f4560b4931df 100644
+--- a/kernel/time/namespace.c
++++ b/kernel/time/namespace.c
+@@ -337,7 +337,20 @@ static struct user_namespace *timens_owner(struct ns_common *ns)
+ 
+ static void show_offset(struct seq_file *m, int clockid, struct timespec64 *ts)
+ {
+-	seq_printf(m, "%d %lld %ld\n", clockid, ts->tv_sec, ts->tv_nsec);
++	char *clock;
++
++	switch (clockid) {
++	case CLOCK_BOOTTIME:
++		clock = "boottime";
++		break;
++	case CLOCK_MONOTONIC:
++		clock = "monotonic";
++		break;
++	default:
++		clock = "unknown";
++		break;
++	}
++	seq_printf(m, "%-10s %10lld %9ld\n", clock, ts->tv_sec, ts->tv_nsec);
+ }
+ 
+ void proc_timens_show_offsets(struct task_struct *p, struct seq_file *m)
+diff --git a/kernel/trace/trace_events_trigger.c b/kernel/trace/trace_events_trigger.c
+index dd34a1b46a86..3a74736da363 100644
+--- a/kernel/trace/trace_events_trigger.c
++++ b/kernel/trace/trace_events_trigger.c
+@@ -1088,14 +1088,10 @@ register_snapshot_trigger(char *glob, struct event_trigger_ops *ops,
+ 			  struct event_trigger_data *data,
+ 			  struct trace_event_file *file)
+ {
+-	int ret = register_trigger(glob, ops, data, file);
+-
+-	if (ret > 0 && tracing_alloc_snapshot_instance(file->tr) != 0) {
+-		unregister_trigger(glob, ops, data, file);
+-		ret = 0;
+-	}
++	if (tracing_alloc_snapshot_instance(file->tr) != 0)
++		return 0;
+ 
+-	return ret;
++	return register_trigger(glob, ops, data, file);
+ }
+ 
+ static int
+diff --git a/net/bpfilter/main.c b/net/bpfilter/main.c
+index efea4874743e..05e1cfc1e5cd 100644
+--- a/net/bpfilter/main.c
++++ b/net/bpfilter/main.c
+@@ -35,7 +35,6 @@ static void loop(void)
+ 		struct mbox_reply reply;
+ 		int n;
+ 
+-		fprintf(debug_f, "testing the buffer\n");
+ 		n = read(0, &req, sizeof(req));
+ 		if (n != sizeof(req)) {
+ 			fprintf(debug_f, "invalid request %d\n", n);
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 500bba8874b0..77c154107b0d 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4140,7 +4140,8 @@ EXPORT_SYMBOL(netdev_max_backlog);
+ 
+ int netdev_tstamp_prequeue __read_mostly = 1;
+ int netdev_budget __read_mostly = 300;
+-unsigned int __read_mostly netdev_budget_usecs = 2000;
++/* Must be at least 2 jiffes to guarantee 1 jiffy timeout */
++unsigned int __read_mostly netdev_budget_usecs = 2 * USEC_PER_SEC / HZ;
+ int weight_p __read_mostly = 64;           /* old backlog weight */
+ int dev_weight_rx_bias __read_mostly = 1;  /* bias for backlog weight */
+ int dev_weight_tx_bias __read_mostly = 1;  /* bias for output_queue quota */
+diff --git a/net/hsr/hsr_netlink.c b/net/hsr/hsr_netlink.c
+index fae21c863b1f..55c0b2e872a5 100644
+--- a/net/hsr/hsr_netlink.c
++++ b/net/hsr/hsr_netlink.c
+@@ -61,10 +61,16 @@ static int hsr_newlink(struct net *src_net, struct net_device *dev,
+ 	else
+ 		multicast_spec = nla_get_u8(data[IFLA_HSR_MULTICAST_SPEC]);
+ 
+-	if (!data[IFLA_HSR_VERSION])
++	if (!data[IFLA_HSR_VERSION]) {
+ 		hsr_version = 0;
+-	else
++	} else {
+ 		hsr_version = nla_get_u8(data[IFLA_HSR_VERSION]);
++		if (hsr_version > 1) {
++			NL_SET_ERR_MSG_MOD(extack,
++					   "Only versions 0..1 are supported");
++			return -EINVAL;
++		}
++	}
+ 
+ 	return hsr_dev_finalize(dev, link, multicast_spec, hsr_version);
+ }
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index e4632bd2026d..458dc6eb5a68 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -614,12 +614,15 @@ struct in_ifaddr *inet_ifa_byprefix(struct in_device *in_dev, __be32 prefix,
+ 	return NULL;
+ }
+ 
+-static int ip_mc_config(struct sock *sk, bool join, const struct in_ifaddr *ifa)
++static int ip_mc_autojoin_config(struct net *net, bool join,
++				 const struct in_ifaddr *ifa)
+ {
++#if defined(CONFIG_IP_MULTICAST)
+ 	struct ip_mreqn mreq = {
+ 		.imr_multiaddr.s_addr = ifa->ifa_address,
+ 		.imr_ifindex = ifa->ifa_dev->dev->ifindex,
+ 	};
++	struct sock *sk = net->ipv4.mc_autojoin_sk;
+ 	int ret;
+ 
+ 	ASSERT_RTNL();
+@@ -632,6 +635,9 @@ static int ip_mc_config(struct sock *sk, bool join, const struct in_ifaddr *ifa)
+ 	release_sock(sk);
+ 
+ 	return ret;
++#else
++	return -EOPNOTSUPP;
++#endif
+ }
+ 
+ static int inet_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
+@@ -675,7 +681,7 @@ static int inet_rtm_deladdr(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 			continue;
+ 
+ 		if (ipv4_is_multicast(ifa->ifa_address))
+-			ip_mc_config(net->ipv4.mc_autojoin_sk, false, ifa);
++			ip_mc_autojoin_config(net, false, ifa);
+ 		__inet_del_ifa(in_dev, ifap, 1, nlh, NETLINK_CB(skb).portid);
+ 		return 0;
+ 	}
+@@ -940,8 +946,7 @@ static int inet_rtm_newaddr(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		 */
+ 		set_ifa_lifetime(ifa, valid_lft, prefered_lft);
+ 		if (ifa->ifa_flags & IFA_F_MCAUTOJOIN) {
+-			int ret = ip_mc_config(net->ipv4.mc_autojoin_sk,
+-					       true, ifa);
++			int ret = ip_mc_autojoin_config(net, true, ifa);
+ 
+ 			if (ret < 0) {
+ 				inet_free_ifa(ifa);
+diff --git a/net/ipv6/icmp.c b/net/ipv6/icmp.c
+index ef408a5090a2..c9504ec6a8d8 100644
+--- a/net/ipv6/icmp.c
++++ b/net/ipv6/icmp.c
+@@ -229,6 +229,25 @@ static bool icmpv6_xrlim_allow(struct sock *sk, u8 type,
+ 	return res;
+ }
+ 
++static bool icmpv6_rt_has_prefsrc(struct sock *sk, u8 type,
++				  struct flowi6 *fl6)
++{
++	struct net *net = sock_net(sk);
++	struct dst_entry *dst;
++	bool res = false;
++
++	dst = ip6_route_output(net, sk, fl6);
++	if (!dst->error) {
++		struct rt6_info *rt = (struct rt6_info *)dst;
++		struct in6_addr prefsrc;
++
++		rt6_get_prefsrc(rt, &prefsrc);
++		res = !ipv6_addr_any(&prefsrc);
++	}
++	dst_release(dst);
++	return res;
++}
++
+ /*
+  *	an inline helper for the "simple" if statement below
+  *	checks if parameter problem report is caused by an
+@@ -527,7 +546,7 @@ static void icmp6_send(struct sk_buff *skb, u8 type, u8 code, __u32 info,
+ 		saddr = force_saddr;
+ 	if (saddr) {
+ 		fl6.saddr = *saddr;
+-	} else {
++	} else if (!icmpv6_rt_has_prefsrc(sk, type, &fl6)) {
+ 		/* select a more meaningful saddr from input if */
+ 		struct net_device *in_netdev;
+ 
+diff --git a/net/l2tp/l2tp_netlink.c b/net/l2tp/l2tp_netlink.c
+index f5a9bdc4980c..ebb381c3f1b9 100644
+--- a/net/l2tp/l2tp_netlink.c
++++ b/net/l2tp/l2tp_netlink.c
+@@ -920,51 +920,51 @@ static const struct genl_ops l2tp_nl_ops[] = {
+ 		.cmd = L2TP_CMD_TUNNEL_CREATE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_tunnel_create,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_TUNNEL_DELETE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_tunnel_delete,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_TUNNEL_MODIFY,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_tunnel_modify,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_TUNNEL_GET,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_tunnel_get,
+ 		.dumpit = l2tp_nl_cmd_tunnel_dump,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_SESSION_CREATE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_session_create,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_SESSION_DELETE,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_session_delete,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_SESSION_MODIFY,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_session_modify,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ 	{
+ 		.cmd = L2TP_CMD_SESSION_GET,
+ 		.validate = GENL_DONT_VALIDATE_STRICT | GENL_DONT_VALIDATE_DUMP,
+ 		.doit = l2tp_nl_cmd_session_get,
+ 		.dumpit = l2tp_nl_cmd_session_dump,
+-		.flags = GENL_ADMIN_PERM,
++		.flags = GENL_UNS_ADMIN_PERM,
+ 	},
+ };
+ 
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index 4c2b5ba3ac09..a14aef11ffb8 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1051,7 +1051,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 		local->hw.wiphy->signal_type = CFG80211_SIGNAL_TYPE_UNSPEC;
+ 		if (hw->max_signal <= 0) {
+ 			result = -EINVAL;
+-			goto fail_wiphy_register;
++			goto fail_workqueue;
+ 		}
+ 	}
+ 
+@@ -1113,7 +1113,7 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 
+ 	result = ieee80211_init_cipher_suites(local);
+ 	if (result < 0)
+-		goto fail_wiphy_register;
++		goto fail_workqueue;
+ 
+ 	if (!local->ops->remain_on_channel)
+ 		local->hw.wiphy->max_remain_on_channel_duration = 5000;
+@@ -1139,10 +1139,6 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 
+ 	local->hw.wiphy->max_num_csa_counters = IEEE80211_MAX_CSA_COUNTERS_NUM;
+ 
+-	result = wiphy_register(local->hw.wiphy);
+-	if (result < 0)
+-		goto fail_wiphy_register;
+-
+ 	/*
+ 	 * We use the number of queues for feature tests (QoS, HT) internally
+ 	 * so restrict them appropriately.
+@@ -1198,9 +1194,9 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 		goto fail_flows;
+ 
+ 	rtnl_lock();
+-
+ 	result = ieee80211_init_rate_ctrl_alg(local,
+ 					      hw->rate_control_algorithm);
++	rtnl_unlock();
+ 	if (result < 0) {
+ 		wiphy_debug(local->hw.wiphy,
+ 			    "Failed to initialize rate control algorithm\n");
+@@ -1254,6 +1250,12 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 		local->sband_allocated |= BIT(band);
+ 	}
+ 
++	result = wiphy_register(local->hw.wiphy);
++	if (result < 0)
++		goto fail_wiphy_register;
++
++	rtnl_lock();
++
+ 	/* add one default STA interface if supported */
+ 	if (local->hw.wiphy->interface_modes & BIT(NL80211_IFTYPE_STATION) &&
+ 	    !ieee80211_hw_check(hw, NO_AUTO_VIF)) {
+@@ -1293,17 +1295,17 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ #if defined(CONFIG_INET) || defined(CONFIG_IPV6)
+  fail_ifa:
+ #endif
++	wiphy_unregister(local->hw.wiphy);
++ fail_wiphy_register:
+ 	rtnl_lock();
+ 	rate_control_deinitialize(local);
+ 	ieee80211_remove_interfaces(local);
+- fail_rate:
+ 	rtnl_unlock();
++ fail_rate:
+  fail_flows:
+ 	ieee80211_led_exit(local);
+ 	destroy_workqueue(local->workqueue);
+  fail_workqueue:
+-	wiphy_unregister(local->hw.wiphy);
+- fail_wiphy_register:
+ 	if (local->wiphy_ciphers_allocated)
+ 		kfree(local->hw.wiphy->cipher_suites);
+ 	kfree(local->int_scan_req);
+@@ -1353,8 +1355,8 @@ void ieee80211_unregister_hw(struct ieee80211_hw *hw)
+ 	skb_queue_purge(&local->skb_queue_unreliable);
+ 	skb_queue_purge(&local->skb_queue_tdls_chsw);
+ 
+-	destroy_workqueue(local->workqueue);
+ 	wiphy_unregister(local->hw.wiphy);
++	destroy_workqueue(local->workqueue);
+ 	ieee80211_led_exit(local);
+ 	kfree(local->int_scan_req);
+ }
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index 5a8e42ad1504..b7b854621c26 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -907,20 +907,21 @@ static int qrtr_sendmsg(struct socket *sock, struct msghdr *msg, size_t len)
+ 
+ 	node = NULL;
+ 	if (addr->sq_node == QRTR_NODE_BCAST) {
+-		enqueue_fn = qrtr_bcast_enqueue;
+-		if (addr->sq_port != QRTR_PORT_CTRL) {
++		if (addr->sq_port != QRTR_PORT_CTRL &&
++		    qrtr_local_nid != QRTR_NODE_BCAST) {
+ 			release_sock(sk);
+ 			return -ENOTCONN;
+ 		}
++		enqueue_fn = qrtr_bcast_enqueue;
+ 	} else if (addr->sq_node == ipc->us.sq_node) {
+ 		enqueue_fn = qrtr_local_enqueue;
+ 	} else {
+-		enqueue_fn = qrtr_node_enqueue;
+ 		node = qrtr_node_lookup(addr->sq_node);
+ 		if (!node) {
+ 			release_sock(sk);
+ 			return -ECONNRESET;
+ 		}
++		enqueue_fn = qrtr_node_enqueue;
+ 	}
+ 
+ 	plen = (len + 3) & ~3;
+diff --git a/net/wireless/nl80211.c b/net/wireless/nl80211.c
+index f0af23c1634a..9f4bce542d87 100644
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -619,10 +619,8 @@ const struct nla_policy nl80211_policy[NUM_NL80211_ATTR] = {
+ 	[NL80211_ATTR_HE_CAPABILITY] = { .type = NLA_BINARY,
+ 					 .len = NL80211_HE_MAX_CAPABILITY_LEN },
+ 
+-	[NL80211_ATTR_FTM_RESPONDER] = {
+-		.type = NLA_NESTED,
+-		.validation_data = nl80211_ftm_responder_policy,
+-	},
++	[NL80211_ATTR_FTM_RESPONDER] =
++		NLA_POLICY_NESTED(nl80211_ftm_responder_policy),
+ 	[NL80211_ATTR_TIMEOUT] = NLA_POLICY_MIN(NLA_U32, 1),
+ 	[NL80211_ATTR_PEER_MEASUREMENTS] =
+ 		NLA_POLICY_NESTED(nl80211_pmsr_attr_policy),
+diff --git a/security/keys/proc.c b/security/keys/proc.c
+index 415f3f1c2da0..d0cde6685627 100644
+--- a/security/keys/proc.c
++++ b/security/keys/proc.c
+@@ -139,6 +139,8 @@ static void *proc_keys_next(struct seq_file *p, void *v, loff_t *_pos)
+ 	n = key_serial_next(p, v);
+ 	if (n)
+ 		*_pos = key_node_serial(n);
++	else
++		(*_pos)++;
+ 	return n;
+ }
+ 
+diff --git a/sound/hda/Kconfig b/sound/hda/Kconfig
+index 4ca6b09056f3..3bc9224d5e4f 100644
+--- a/sound/hda/Kconfig
++++ b/sound/hda/Kconfig
+@@ -21,16 +21,17 @@ config SND_HDA_EXT_CORE
+        select SND_HDA_CORE
+ 
+ config SND_HDA_PREALLOC_SIZE
+-	int "Pre-allocated buffer size for HD-audio driver" if !SND_DMA_SGBUF
++	int "Pre-allocated buffer size for HD-audio driver"
+ 	range 0 32768
+-	default 0 if SND_DMA_SGBUF
++	default 2048 if SND_DMA_SGBUF
+ 	default 64 if !SND_DMA_SGBUF
+ 	help
+ 	  Specifies the default pre-allocated buffer-size in kB for the
+ 	  HD-audio driver.  A larger buffer (e.g. 2048) is preferred
+ 	  for systems using PulseAudio.  The default 64 is chosen just
+ 	  for compatibility reasons.
+-	  On x86 systems, the default is zero as we need no preallocation.
++	  On x86 systems, the default is 2048 as a reasonable value for
++	  most of modern systems.
+ 
+ 	  Note that the pre-allocation size can be changed dynamically
+ 	  via a proc file (/proc/asound/card*/pcm*/sub*/prealloc), too.
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f57716d48557..02b9830d4b5f 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7253,6 +7253,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x16e3, "ASUS UX50", ALC269_FIXUP_STEREO_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x17d1, "ASUS UX431FL", ALC294_FIXUP_ASUS_DUAL_SPK),
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
++	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ 	SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+diff --git a/sound/soc/intel/atom/sst-atom-controls.c b/sound/soc/intel/atom/sst-atom-controls.c
+index baef461a99f1..f883c9340eee 100644
+--- a/sound/soc/intel/atom/sst-atom-controls.c
++++ b/sound/soc/intel/atom/sst-atom-controls.c
+@@ -1333,7 +1333,7 @@ int sst_send_pipe_gains(struct snd_soc_dai *dai, int stream, int mute)
+ 				dai->capture_widget->name);
+ 		w = dai->capture_widget;
+ 		snd_soc_dapm_widget_for_each_source_path(w, p) {
+-			if (p->connected && !p->connected(w, p->sink))
++			if (p->connected && !p->connected(w, p->source))
+ 				continue;
+ 
+ 			if (p->connect &&  p->source->power &&
+diff --git a/sound/soc/intel/atom/sst/sst_pci.c b/sound/soc/intel/atom/sst/sst_pci.c
+index d952719bc098..5862fe968083 100644
+--- a/sound/soc/intel/atom/sst/sst_pci.c
++++ b/sound/soc/intel/atom/sst/sst_pci.c
+@@ -99,7 +99,7 @@ static int sst_platform_get_resources(struct intel_sst_drv *ctx)
+ 	dev_dbg(ctx->dev, "DRAM Ptr %p\n", ctx->dram);
+ do_release_regions:
+ 	pci_release_regions(pci);
+-	return 0;
++	return ret;
+ }
+ 
+ /*
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 81b2db0edd5f..7e2e1fc5b9f0 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1446,7 +1446,7 @@ error:
+ 		usb_audio_err(chip,
+ 			"cannot get connectors status: req = %#x, wValue = %#x, wIndex = %#x, type = %d\n",
+ 			UAC_GET_CUR, validx, idx, cval->val_type);
+-		return ret;
++		return filter_error(cval, ret);
+ 	}
+ 
+ 	ucontrol->value.integer.value[0] = val;
+@@ -1750,11 +1750,15 @@ static void get_connector_control_name(struct usb_mixer_interface *mixer,
+ 
+ /* Build a mixer control for a UAC connector control (jack-detect) */
+ static void build_connector_control(struct usb_mixer_interface *mixer,
++				    const struct usbmix_name_map *imap,
+ 				    struct usb_audio_term *term, bool is_input)
+ {
+ 	struct snd_kcontrol *kctl;
+ 	struct usb_mixer_elem_info *cval;
+ 
++	if (check_ignored_ctl(find_map(imap, term->id, 0)))
++		return;
++
+ 	cval = kzalloc(sizeof(*cval), GFP_KERNEL);
+ 	if (!cval)
+ 		return;
+@@ -2088,8 +2092,9 @@ static int parse_audio_input_terminal(struct mixer_build *state, int unitid,
+ 	check_input_term(state, term_id, &iterm);
+ 
+ 	/* Check for jack detection. */
+-	if (uac_v2v3_control_is_readable(bmctls, control))
+-		build_connector_control(state->mixer, &iterm, true);
++	if ((iterm.type & 0xff00) != 0x0100 &&
++	    uac_v2v3_control_is_readable(bmctls, control))
++		build_connector_control(state->mixer, state->map, &iterm, true);
+ 
+ 	return 0;
+ }
+@@ -3050,13 +3055,13 @@ static int snd_usb_mixer_controls_badd(struct usb_mixer_interface *mixer,
+ 		memset(&iterm, 0, sizeof(iterm));
+ 		iterm.id = UAC3_BADD_IT_ID4;
+ 		iterm.type = UAC_BIDIR_TERMINAL_HEADSET;
+-		build_connector_control(mixer, &iterm, true);
++		build_connector_control(mixer, map->map, &iterm, true);
+ 
+ 		/* Output Term - Insertion control */
+ 		memset(&oterm, 0, sizeof(oterm));
+ 		oterm.id = UAC3_BADD_OT_ID3;
+ 		oterm.type = UAC_BIDIR_TERMINAL_HEADSET;
+-		build_connector_control(mixer, &oterm, false);
++		build_connector_control(mixer, map->map, &oterm, false);
+ 	}
+ 
+ 	return 0;
+@@ -3085,7 +3090,7 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ 		if (map->id == state.chip->usb_id) {
+ 			state.map = map->map;
+ 			state.selector_map = map->selector_map;
+-			mixer->ignore_ctl_error = map->ignore_ctl_error;
++			mixer->ignore_ctl_error |= map->ignore_ctl_error;
+ 			break;
+ 		}
+ 	}
+@@ -3128,10 +3133,11 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ 			if (err < 0 && err != -EINVAL)
+ 				return err;
+ 
+-			if (uac_v2v3_control_is_readable(le16_to_cpu(desc->bmControls),
++			if ((state.oterm.type & 0xff00) != 0x0100 &&
++			    uac_v2v3_control_is_readable(le16_to_cpu(desc->bmControls),
+ 							 UAC2_TE_CONNECTOR)) {
+-				build_connector_control(state.mixer, &state.oterm,
+-							false);
++				build_connector_control(state.mixer, state.map,
++							&state.oterm, false);
+ 			}
+ 		} else {  /* UAC_VERSION_3 */
+ 			struct uac3_output_terminal_descriptor *desc = p;
+@@ -3153,10 +3159,11 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ 			if (err < 0 && err != -EINVAL)
+ 				return err;
+ 
+-			if (uac_v2v3_control_is_readable(le32_to_cpu(desc->bmControls),
++			if ((state.oterm.type & 0xff00) != 0x0100 &&
++			    uac_v2v3_control_is_readable(le32_to_cpu(desc->bmControls),
+ 							 UAC3_TE_INSERTION)) {
+-				build_connector_control(state.mixer, &state.oterm,
+-							false);
++				build_connector_control(state.mixer, state.map,
++							&state.oterm, false);
+ 			}
+ 		}
+ 	}
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 72b575c34860..b4e77000f441 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -360,9 +360,11 @@ static const struct usbmix_name_map corsair_virtuoso_map[] = {
+ };
+ 
+ /* Some mobos shipped with a dummy HD-audio show the invalid GET_MIN/GET_MAX
+- * response for Input Gain Pad (id=19, control=12).  Skip it.
++ * response for Input Gain Pad (id=19, control=12) and the connector status
++ * for SPDIF terminal (id=18).  Skip them.
+  */
+ static const struct usbmix_name_map asus_rog_map[] = {
++	{ 18, NULL }, /* OT, connector control */
+ 	{ 19, NULL, 12 }, /* FU, Input Gain Pad */
+ 	{}
+ };
+diff --git a/tools/perf/builtin-report.c b/tools/perf/builtin-report.c
+index 72a12b69f120..f480969e9a01 100644
+--- a/tools/perf/builtin-report.c
++++ b/tools/perf/builtin-report.c
+@@ -185,24 +185,23 @@ static int hist_iter__branch_callback(struct hist_entry_iter *iter,
+ {
+ 	struct hist_entry *he = iter->he;
+ 	struct report *rep = arg;
+-	struct branch_info *bi;
++	struct branch_info *bi = he->branch_info;
+ 	struct perf_sample *sample = iter->sample;
+ 	struct evsel *evsel = iter->evsel;
+ 	int err;
+ 
++	branch_type_count(&rep->brtype_stat, &bi->flags,
++			  bi->from.addr, bi->to.addr);
++
+ 	if (!ui__has_annotation() && !rep->symbol_ipc)
+ 		return 0;
+ 
+-	bi = he->branch_info;
+ 	err = addr_map_symbol__inc_samples(&bi->from, sample, evsel);
+ 	if (err)
+ 		goto out;
+ 
+ 	err = addr_map_symbol__inc_samples(&bi->to, sample, evsel);
+ 
+-	branch_type_count(&rep->brtype_stat, &bi->flags,
+-			  bi->from.addr, bi->to.addr);
+-
+ out:
+ 	return err;
+ }



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-23 11:56 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-23 11:56 UTC (permalink / raw
  To: gentoo-commits

commit:     77ac4ddfeee00a9bc434886268f66c421d281d82
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu Apr 23 11:56:06 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu Apr 23 11:56:06 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=77ac4ddf

Linux patch 5.6.7

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1006_linux-5.6.7.patch | 6688 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6692 insertions(+)

diff --git a/0000_README b/0000_README
index 073a921..8000cff 100644
--- a/0000_README
+++ b/0000_README
@@ -67,6 +67,10 @@ Patch:  1005_linux-5.6.6.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.6
 
+Patch:  1006_linux-5.6.7.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.7
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1006_linux-5.6.7.patch b/1006_linux-5.6.7.patch
new file mode 100644
index 0000000..4c0dfa8
--- /dev/null
+++ b/1006_linux-5.6.7.patch
@@ -0,0 +1,6688 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index c07815d230bc..6ba631cc5a56 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -2795,7 +2795,7 @@
+ 			<name>,<region-number>[,<base>,<size>,<buswidth>,<altbuswidth>]
+ 
+ 	mtdparts=	[MTD]
+-			See drivers/mtd/cmdlinepart.c.
++			See drivers/mtd/parsers/cmdlinepart.c
+ 
+ 	multitce=off	[PPC]  This parameter disables the use of the pSeries
+ 			firmware feature for updating multiple TCE entries
+diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
+index b739f92da58e..1f90eb39870b 100644
+--- a/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
++++ b/Documentation/devicetree/bindings/pci/nvidia,tegra194-pcie.txt
+@@ -118,7 +118,7 @@ Tegra194:
+ --------
+ 
+ 	pcie@14180000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+ 		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
+diff --git a/Documentation/devicetree/bindings/thermal/qcom-tsens.yaml b/Documentation/devicetree/bindings/thermal/qcom-tsens.yaml
+index eef13b9446a8..a4df53228122 100644
+--- a/Documentation/devicetree/bindings/thermal/qcom-tsens.yaml
++++ b/Documentation/devicetree/bindings/thermal/qcom-tsens.yaml
+@@ -53,13 +53,12 @@ properties:
+     description:
+       Reference to an nvmem node for the calibration data
+ 
+-  nvmem-cells-names:
++  nvmem-cell-names:
+     minItems: 1
+     maxItems: 2
+     items:
+-      - enum:
+-        - caldata
+-        - calsel
++      - const: calib
++      - const: calib_sel
+ 
+   "#qcom,sensors":
+     allOf:
+@@ -125,7 +124,7 @@ examples:
+                  <0x4a8000 0x1000>; /* SROT */
+ 
+            nvmem-cells = <&tsens_caldata>, <&tsens_calsel>;
+-           nvmem-cell-names = "caldata", "calsel";
++           nvmem-cell-names = "calib", "calib_sel";
+ 
+            interrupts = <GIC_SPI 184 IRQ_TYPE_LEVEL_HIGH>;
+            interrupt-names = "uplow";
+diff --git a/Makefile b/Makefile
+index af76c00de7f6..b64df959e5d7 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 6
++SUBLEVEL = 7
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
+index e6b4b8525f98..bc488df31511 100644
+--- a/arch/arm/boot/dts/imx6qdl.dtsi
++++ b/arch/arm/boot/dts/imx6qdl.dtsi
+@@ -1039,9 +1039,8 @@
+ 				compatible = "fsl,imx6q-fec";
+ 				reg = <0x02188000 0x4000>;
+ 				interrupt-names = "int0", "pps";
+-				interrupts-extended =
+-					<&intc 0 118 IRQ_TYPE_LEVEL_HIGH>,
+-					<&intc 0 119 IRQ_TYPE_LEVEL_HIGH>;
++				interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>,
++					     <0 119 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clks IMX6QDL_CLK_ENET>,
+ 					 <&clks IMX6QDL_CLK_ENET>,
+ 					 <&clks IMX6QDL_CLK_ENET_REF>;
+diff --git a/arch/arm/boot/dts/imx6qp.dtsi b/arch/arm/boot/dts/imx6qp.dtsi
+index 5f51f8e5c1fa..d91f92f944c5 100644
+--- a/arch/arm/boot/dts/imx6qp.dtsi
++++ b/arch/arm/boot/dts/imx6qp.dtsi
+@@ -77,7 +77,6 @@
+ };
+ 
+ &fec {
+-	/delete-property/interrupts-extended;
+ 	interrupts = <0 118 IRQ_TYPE_LEVEL_HIGH>,
+ 		     <0 119 IRQ_TYPE_LEVEL_HIGH>;
+ };
+diff --git a/arch/arm/boot/dts/rk3188-bqedison2qc.dts b/arch/arm/boot/dts/rk3188-bqedison2qc.dts
+index ad1afd403052..66a0ff196eb1 100644
+--- a/arch/arm/boot/dts/rk3188-bqedison2qc.dts
++++ b/arch/arm/boot/dts/rk3188-bqedison2qc.dts
+@@ -58,20 +58,25 @@
+ 
+ 	lvds-encoder {
+ 		compatible = "ti,sn75lvds83", "lvds-encoder";
+-		#address-cells = <1>;
+-		#size-cells = <0>;
+ 
+-		port@0 {
+-			reg = <0>;
+-			lvds_in_vop0: endpoint {
+-				remote-endpoint = <&vop0_out_lvds>;
++		ports {
++			#address-cells = <1>;
++			#size-cells = <0>;
++
++			port@0 {
++				reg = <0>;
++
++				lvds_in_vop0: endpoint {
++					remote-endpoint = <&vop0_out_lvds>;
++				};
+ 			};
+-		};
+ 
+-		port@1 {
+-			reg = <1>;
+-			lvds_out_panel: endpoint {
+-				remote-endpoint = <&panel_in_lvds>;
++			port@1 {
++				reg = <1>;
++
++				lvds_out_panel: endpoint {
++					remote-endpoint = <&panel_in_lvds>;
++				};
+ 			};
+ 		};
+ 	};
+@@ -465,7 +470,7 @@
+ 	non-removable;
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&sd1_clk>, <&sd1_cmd>, <&sd1_bus4>;
+-	vmmcq-supply = <&vccio_wl>;
++	vqmmc-supply = <&vccio_wl>;
+ 	#address-cells = <1>;
+ 	#size-cells = <0>;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/sun8i-a83t.dtsi b/arch/arm/boot/dts/sun8i-a83t.dtsi
+index e7b9bef1be6b..bd1287eca253 100644
+--- a/arch/arm/boot/dts/sun8i-a83t.dtsi
++++ b/arch/arm/boot/dts/sun8i-a83t.dtsi
+@@ -314,7 +314,7 @@
+ 
+ 		display_clocks: clock@1000000 {
+ 			compatible = "allwinner,sun8i-a83t-de2-clk";
+-			reg = <0x01000000 0x100000>;
++			reg = <0x01000000 0x10000>;
+ 			clocks = <&ccu CLK_BUS_DE>,
+ 				 <&ccu CLK_PLL_DE>;
+ 			clock-names = "bus",
+diff --git a/arch/arm/boot/dts/sun8i-r40.dtsi b/arch/arm/boot/dts/sun8i-r40.dtsi
+index a9d5d6ddbd71..a3867491bb46 100644
+--- a/arch/arm/boot/dts/sun8i-r40.dtsi
++++ b/arch/arm/boot/dts/sun8i-r40.dtsi
+@@ -119,7 +119,7 @@
+ 		display_clocks: clock@1000000 {
+ 			compatible = "allwinner,sun8i-r40-de2-clk",
+ 				     "allwinner,sun8i-h3-de2-clk";
+-			reg = <0x01000000 0x100000>;
++			reg = <0x01000000 0x10000>;
+ 			clocks = <&ccu CLK_BUS_DE>,
+ 				 <&ccu CLK_DE>;
+ 			clock-names = "bus",
+diff --git a/arch/arm/boot/dts/sun8i-v3s.dtsi b/arch/arm/boot/dts/sun8i-v3s.dtsi
+index 81ea50838cd5..e5312869c0d2 100644
+--- a/arch/arm/boot/dts/sun8i-v3s.dtsi
++++ b/arch/arm/boot/dts/sun8i-v3s.dtsi
+@@ -105,7 +105,7 @@
+ 
+ 		display_clocks: clock@1000000 {
+ 			compatible = "allwinner,sun8i-v3s-de2-clk";
+-			reg = <0x01000000 0x100000>;
++			reg = <0x01000000 0x10000>;
+ 			clocks = <&ccu CLK_BUS_DE>,
+ 				 <&ccu CLK_DE>;
+ 			clock-names = "bus",
+diff --git a/arch/arm/boot/dts/sunxi-h3-h5.dtsi b/arch/arm/boot/dts/sunxi-h3-h5.dtsi
+index 5e9c3060aa08..799f32bafd80 100644
+--- a/arch/arm/boot/dts/sunxi-h3-h5.dtsi
++++ b/arch/arm/boot/dts/sunxi-h3-h5.dtsi
+@@ -114,7 +114,7 @@
+ 
+ 		display_clocks: clock@1000000 {
+ 			/* compatible is in per SoC .dtsi file */
+-			reg = <0x01000000 0x100000>;
++			reg = <0x01000000 0x10000>;
+ 			clocks = <&ccu CLK_BUS_DE>,
+ 				 <&ccu CLK_DE>;
+ 			clock-names = "bus",
+diff --git a/arch/arm/net/bpf_jit_32.c b/arch/arm/net/bpf_jit_32.c
+index cc29869d12a3..bf85d6db4931 100644
+--- a/arch/arm/net/bpf_jit_32.c
++++ b/arch/arm/net/bpf_jit_32.c
+@@ -929,7 +929,11 @@ static inline void emit_a32_rsh_i64(const s8 dst[],
+ 	rd = arm_bpf_get_reg64(dst, tmp, ctx);
+ 
+ 	/* Do LSR operation */
+-	if (val < 32) {
++	if (val == 0) {
++		/* An immediate value of 0 encodes a shift amount of 32
++		 * for LSR. To shift by 0, don't do anything.
++		 */
++	} else if (val < 32) {
+ 		emit(ARM_MOV_SI(tmp2[1], rd[1], SRTYPE_LSR, val), ctx);
+ 		emit(ARM_ORR_SI(rd[1], tmp2[1], rd[0], SRTYPE_ASL, 32 - val), ctx);
+ 		emit(ARM_MOV_SI(rd[0], rd[0], SRTYPE_LSR, val), ctx);
+@@ -955,7 +959,11 @@ static inline void emit_a32_arsh_i64(const s8 dst[],
+ 	rd = arm_bpf_get_reg64(dst, tmp, ctx);
+ 
+ 	/* Do ARSH operation */
+-	if (val < 32) {
++	if (val == 0) {
++		/* An immediate value of 0 encodes a shift amount of 32
++		 * for ASR. To shift by 0, don't do anything.
++		 */
++	} else if (val < 32) {
+ 		emit(ARM_MOV_SI(tmp2[1], rd[1], SRTYPE_LSR, val), ctx);
+ 		emit(ARM_ORR_SI(rd[1], tmp2[1], rd[0], SRTYPE_ASL, 32 - val), ctx);
+ 		emit(ARM_MOV_SI(rd[0], rd[0], SRTYPE_ASR, val), ctx);
+@@ -992,21 +1000,35 @@ static inline void emit_a32_mul_r64(const s8 dst[], const s8 src[],
+ 	arm_bpf_put_reg32(dst_hi, rd[0], ctx);
+ }
+ 
++static bool is_ldst_imm(s16 off, const u8 size)
++{
++	s16 off_max = 0;
++
++	switch (size) {
++	case BPF_B:
++	case BPF_W:
++		off_max = 0xfff;
++		break;
++	case BPF_H:
++		off_max = 0xff;
++		break;
++	case BPF_DW:
++		/* Need to make sure off+4 does not overflow. */
++		off_max = 0xfff - 4;
++		break;
++	}
++	return -off_max <= off && off <= off_max;
++}
++
+ /* *(size *)(dst + off) = src */
+ static inline void emit_str_r(const s8 dst, const s8 src[],
+-			      s32 off, struct jit_ctx *ctx, const u8 sz){
++			      s16 off, struct jit_ctx *ctx, const u8 sz){
+ 	const s8 *tmp = bpf2a32[TMP_REG_1];
+-	s32 off_max;
+ 	s8 rd;
+ 
+ 	rd = arm_bpf_get_reg32(dst, tmp[1], ctx);
+ 
+-	if (sz == BPF_H)
+-		off_max = 0xff;
+-	else
+-		off_max = 0xfff;
+-
+-	if (off < 0 || off > off_max) {
++	if (!is_ldst_imm(off, sz)) {
+ 		emit_a32_mov_i(tmp[0], off, ctx);
+ 		emit(ARM_ADD_R(tmp[0], tmp[0], rd), ctx);
+ 		rd = tmp[0];
+@@ -1035,18 +1057,12 @@ static inline void emit_str_r(const s8 dst, const s8 src[],
+ 
+ /* dst = *(size*)(src + off) */
+ static inline void emit_ldx_r(const s8 dst[], const s8 src,
+-			      s32 off, struct jit_ctx *ctx, const u8 sz){
++			      s16 off, struct jit_ctx *ctx, const u8 sz){
+ 	const s8 *tmp = bpf2a32[TMP_REG_1];
+ 	const s8 *rd = is_stacked(dst_lo) ? tmp : dst;
+ 	s8 rm = src;
+-	s32 off_max;
+-
+-	if (sz == BPF_H)
+-		off_max = 0xff;
+-	else
+-		off_max = 0xfff;
+ 
+-	if (off < 0 || off > off_max) {
++	if (!is_ldst_imm(off, sz)) {
+ 		emit_a32_mov_i(tmp[0], off, ctx);
+ 		emit(ARM_ADD_R(tmp[0], tmp[0], src), ctx);
+ 		rm = tmp[0];
+diff --git a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+index 862b47dc9dc9..baa6f08dc108 100644
+--- a/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
++++ b/arch/arm64/boot/dts/allwinner/sun50i-a64.dtsi
+@@ -264,7 +264,7 @@
+ 
+ 			display_clocks: clock@0 {
+ 				compatible = "allwinner,sun50i-a64-de2-clk";
+-				reg = <0x0 0x100000>;
++				reg = <0x0 0x10000>;
+ 				clocks = <&ccu CLK_BUS_DE>,
+ 					 <&ccu CLK_DE>;
+ 				clock-names = "bus",
+diff --git a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+index 53b8ac55a7f3..e5262dab28f5 100644
+--- a/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-3720-espressobin.dtsi
+@@ -13,6 +13,12 @@
+ #include "armada-372x.dtsi"
+ 
+ / {
++	aliases {
++		ethernet0 = &eth0;
++		serial0 = &uart0;
++		serial1 = &uart1;
++	};
++
+ 	chosen {
+ 		stdout-path = "serial0:115200n8";
+ 	};
+diff --git a/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts b/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
+index a211a046b2f2..b90d78a5724b 100644
+--- a/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
++++ b/arch/arm64/boot/dts/marvell/armada-8040-clearfog-gt-8k.dts
+@@ -367,6 +367,7 @@
+ 		pinctrl-0 = <&cp0_copper_eth_phy_reset>;
+ 		reset-gpios = <&cp0_gpio2 11 GPIO_ACTIVE_LOW>;
+ 		reset-assert-us = <10000>;
++		reset-deassert-us = <10000>;
+ 	};
+ 
+ 	switch0: switch0@4 {
+diff --git a/arch/arm64/boot/dts/marvell/armada-ap807-quad.dtsi b/arch/arm64/boot/dts/marvell/armada-ap807-quad.dtsi
+index 840466e143b4..68782f161f12 100644
+--- a/arch/arm64/boot/dts/marvell/armada-ap807-quad.dtsi
++++ b/arch/arm64/boot/dts/marvell/armada-ap807-quad.dtsi
+@@ -17,7 +17,7 @@
+ 
+ 		cpu0: cpu@0 {
+ 			device_type = "cpu";
+-			compatible = "arm,cortex-a72", "arm,armv8";
++			compatible = "arm,cortex-a72";
+ 			reg = <0x000>;
+ 			enable-method = "psci";
+ 			#cooling-cells = <2>;
+@@ -32,7 +32,7 @@
+ 		};
+ 		cpu1: cpu@1 {
+ 			device_type = "cpu";
+-			compatible = "arm,cortex-a72", "arm,armv8";
++			compatible = "arm,cortex-a72";
+ 			reg = <0x001>;
+ 			enable-method = "psci";
+ 			#cooling-cells = <2>;
+@@ -47,7 +47,7 @@
+ 		};
+ 		cpu2: cpu@100 {
+ 			device_type = "cpu";
+-			compatible = "arm,cortex-a72", "arm,armv8";
++			compatible = "arm,cortex-a72";
+ 			reg = <0x100>;
+ 			enable-method = "psci";
+ 			#cooling-cells = <2>;
+@@ -62,7 +62,7 @@
+ 		};
+ 		cpu3: cpu@101 {
+ 			device_type = "cpu";
+-			compatible = "arm,cortex-a72", "arm,armv8";
++			compatible = "arm,cortex-a72";
+ 			reg = <0x101>;
+ 			enable-method = "psci";
+ 			#cooling-cells = <2>;
+diff --git a/arch/arm64/boot/dts/nvidia/tegra194.dtsi b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+index ccac43be12ac..a8f024662e60 100644
+--- a/arch/arm64/boot/dts/nvidia/tegra194.dtsi
++++ b/arch/arm64/boot/dts/nvidia/tegra194.dtsi
+@@ -1208,7 +1208,7 @@
+ 	};
+ 
+ 	pcie@14100000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX1A>;
+ 		reg = <0x00 0x14100000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x30000000 0x0 0x00040000   /* configuration space (256K) */
+@@ -1253,7 +1253,7 @@
+ 	};
+ 
+ 	pcie@14120000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX1A>;
+ 		reg = <0x00 0x14120000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x32000000 0x0 0x00040000   /* configuration space (256K) */
+@@ -1298,7 +1298,7 @@
+ 	};
+ 
+ 	pcie@14140000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX1A>;
+ 		reg = <0x00 0x14140000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x34000000 0x0 0x00040000   /* configuration space (256K) */
+@@ -1343,7 +1343,7 @@
+ 	};
+ 
+ 	pcie@14160000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX4A>;
+ 		reg = <0x00 0x14160000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x36000000 0x0 0x00040000   /* configuration space (256K) */
+@@ -1388,7 +1388,7 @@
+ 	};
+ 
+ 	pcie@14180000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
+ 		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x38000000 0x0 0x00040000   /* configuration space (256K) */
+@@ -1433,7 +1433,7 @@
+ 	};
+ 
+ 	pcie@141a0000 {
+-		compatible = "nvidia,tegra194-pcie", "snps,dw-pcie";
++		compatible = "nvidia,tegra194-pcie";
+ 		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
+ 		reg = <0x00 0x141a0000 0x0 0x00020000   /* appl registers (128K)      */
+ 		       0x00 0x3a000000 0x0 0x00040000   /* configuration space (256K) */
+@@ -1481,6 +1481,105 @@
+ 			  0x82000000 0x0  0x40000000 0x1f 0x40000000 0x0 0xc0000000>; /* non-prefetchable memory (3GB) */
+ 	};
+ 
++	pcie_ep@14160000 {
++		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX4A>;
++		reg = <0x00 0x14160000 0x0 0x00020000   /* appl registers (128K)      */
++		       0x00 0x36040000 0x0 0x00040000   /* iATU_DMA reg space (256K)  */
++		       0x00 0x36080000 0x0 0x00040000   /* DBI reg space (256K)       */
++		       0x14 0x00000000 0x4 0x00000000>; /* Address Space (16G)        */
++		reg-names = "appl", "atu_dma", "dbi", "addr_space";
++
++		status = "disabled";
++
++		num-lanes = <4>;
++		num-ib-windows = <2>;
++		num-ob-windows = <8>;
++
++		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_4>;
++		clock-names = "core";
++
++		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_4_APB>,
++			 <&bpmp TEGRA194_RESET_PEX0_CORE_4>;
++		reset-names = "apb", "core";
++
++		interrupts = <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;	/* controller interrupt */
++		interrupt-names = "intr";
++
++		nvidia,bpmp = <&bpmp 4>;
++
++		nvidia,aspm-cmrt-us = <60>;
++		nvidia,aspm-pwr-on-t-us = <20>;
++		nvidia,aspm-l0s-entrance-latency-us = <3>;
++	};
++
++	pcie_ep@14180000 {
++		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8B>;
++		reg = <0x00 0x14180000 0x0 0x00020000   /* appl registers (128K)      */
++		       0x00 0x38040000 0x0 0x00040000   /* iATU_DMA reg space (256K)  */
++		       0x00 0x38080000 0x0 0x00040000   /* DBI reg space (256K)       */
++		       0x18 0x00000000 0x4 0x00000000>; /* Address Space (16G)        */
++		reg-names = "appl", "atu_dma", "dbi", "addr_space";
++
++		status = "disabled";
++
++		num-lanes = <8>;
++		num-ib-windows = <2>;
++		num-ob-windows = <8>;
++
++		clocks = <&bpmp TEGRA194_CLK_PEX0_CORE_0>;
++		clock-names = "core";
++
++		resets = <&bpmp TEGRA194_RESET_PEX0_CORE_0_APB>,
++			 <&bpmp TEGRA194_RESET_PEX0_CORE_0>;
++		reset-names = "apb", "core";
++
++		interrupts = <GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH>;	/* controller interrupt */
++		interrupt-names = "intr";
++
++		nvidia,bpmp = <&bpmp 0>;
++
++		nvidia,aspm-cmrt-us = <60>;
++		nvidia,aspm-pwr-on-t-us = <20>;
++		nvidia,aspm-l0s-entrance-latency-us = <3>;
++	};
++
++	pcie_ep@141a0000 {
++		compatible = "nvidia,tegra194-pcie-ep", "snps,dw-pcie-ep";
++		power-domains = <&bpmp TEGRA194_POWER_DOMAIN_PCIEX8A>;
++		reg = <0x00 0x141a0000 0x0 0x00020000   /* appl registers (128K)      */
++		       0x00 0x3a040000 0x0 0x00040000   /* iATU_DMA reg space (256K)  */
++		       0x00 0x3a080000 0x0 0x00040000   /* DBI reg space (256K)       */
++		       0x1c 0x00000000 0x4 0x00000000>; /* Address Space (16G)        */
++		reg-names = "appl", "atu_dma", "dbi", "addr_space";
++
++		status = "disabled";
++
++		num-lanes = <8>;
++		num-ib-windows = <2>;
++		num-ob-windows = <8>;
++
++		pinctrl-names = "default";
++		pinctrl-0 = <&clkreq_c5_bi_dir_state>;
++
++		clocks = <&bpmp TEGRA194_CLK_PEX1_CORE_5>;
++		clock-names = "core";
++
++		resets = <&bpmp TEGRA194_RESET_PEX1_CORE_5_APB>,
++			 <&bpmp TEGRA194_RESET_PEX1_CORE_5>;
++		reset-names = "apb", "core";
++
++		interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;	/* controller interrupt */
++		interrupt-names = "intr";
++
++		nvidia,bpmp = <&bpmp 5>;
++
++		nvidia,aspm-cmrt-us = <60>;
++		nvidia,aspm-pwr-on-t-us = <20>;
++		nvidia,aspm-l0s-entrance-latency-us = <3>;
++	};
++
+ 	sysram@40000000 {
+ 		compatible = "nvidia,tegra194-sysram", "mmio-sram";
+ 		reg = <0x0 0x40000000 0x0 0x50000>;
+diff --git a/arch/csky/abiv1/inc/abi/entry.h b/arch/csky/abiv1/inc/abi/entry.h
+index f35a9f3315ee..5056ebb902d1 100644
+--- a/arch/csky/abiv1/inc/abi/entry.h
++++ b/arch/csky/abiv1/inc/abi/entry.h
+@@ -172,10 +172,7 @@
+ 	addi	r6, 0xe
+ 	cpwcr	r6, cpcr30
+ 
+-	lsri	r6, 28
+-	addi	r6, 2
+-	lsli	r6, 28
+-	addi	r6, 0xe
++	movi	r6, 0
+ 	cpwcr	r6, cpcr31
+ .endm
+ 
+diff --git a/arch/csky/abiv2/fpu.c b/arch/csky/abiv2/fpu.c
+index 86d187d4e5af..5acc5c2e544e 100644
+--- a/arch/csky/abiv2/fpu.c
++++ b/arch/csky/abiv2/fpu.c
+@@ -10,11 +10,6 @@
+ #define MTCR_DIST	0xC0006420
+ #define MFCR_DIST	0xC0006020
+ 
+-void __init init_fpu(void)
+-{
+-	mtcr("cr<1, 2>", 0);
+-}
+-
+ /*
+  * fpu_libc_helper() is to help libc to excute:
+  *  - mfcr %a, cr<1, 2>
+diff --git a/arch/csky/abiv2/inc/abi/entry.h b/arch/csky/abiv2/inc/abi/entry.h
+index 94a7a58765df..111973c6c713 100644
+--- a/arch/csky/abiv2/inc/abi/entry.h
++++ b/arch/csky/abiv2/inc/abi/entry.h
+@@ -230,11 +230,8 @@
+ 	addi	r6, 0x1ce
+ 	mtcr	r6, cr<30, 15> /* Set MSA0 */
+ 
+-	lsri	r6, 28
+-	addi	r6, 2
+-	lsli	r6, 28
+-	addi	r6, 0x1ce
+-	mtcr	r6, cr<31, 15> /* Set MSA1 */
++	movi    r6, 0
++	mtcr	r6, cr<31, 15> /* Clr MSA1 */
+ 
+ 	/* enable MMU */
+ 	mfcr    r6, cr18
+diff --git a/arch/csky/abiv2/inc/abi/fpu.h b/arch/csky/abiv2/inc/abi/fpu.h
+index 22ca3cf2794a..09e2700a3693 100644
+--- a/arch/csky/abiv2/inc/abi/fpu.h
++++ b/arch/csky/abiv2/inc/abi/fpu.h
+@@ -9,7 +9,8 @@
+ 
+ int fpu_libc_helper(struct pt_regs *regs);
+ void fpu_fpe(struct pt_regs *regs);
+-void __init init_fpu(void);
++
++static inline void init_fpu(void) { mtcr("cr<1, 2>", 0); }
+ 
+ void save_to_user_fp(struct user_fp *user_fp);
+ void restore_from_user_fp(struct user_fp *user_fp);
+diff --git a/arch/csky/include/asm/processor.h b/arch/csky/include/asm/processor.h
+index 21e0bd5293dd..c6bcd7f7c720 100644
+--- a/arch/csky/include/asm/processor.h
++++ b/arch/csky/include/asm/processor.h
+@@ -43,6 +43,7 @@ extern struct cpuinfo_csky cpu_data[];
+ struct thread_struct {
+ 	unsigned long  ksp;       /* kernel stack pointer */
+ 	unsigned long  sr;        /* saved status register */
++	unsigned long  trap_no;   /* saved status register */
+ 
+ 	/* FPU regs */
+ 	struct user_fp __aligned(16) user_fp;
+diff --git a/arch/csky/kernel/head.S b/arch/csky/kernel/head.S
+index 61989f9241c0..17ed9d250480 100644
+--- a/arch/csky/kernel/head.S
++++ b/arch/csky/kernel/head.S
+@@ -21,6 +21,11 @@ END(_start)
+ ENTRY(_start_smp_secondary)
+ 	SETUP_MMU
+ 
++	/* copy msa1 from CPU0 */
++	lrw     r6, secondary_msa1
++	ld.w	r6, (r6, 0)
++	mtcr	r6, cr<31, 15>
++
+ 	/* set stack point */
+ 	lrw     r6, secondary_stack
+ 	ld.w	r6, (r6, 0)
+diff --git a/arch/csky/kernel/setup.c b/arch/csky/kernel/setup.c
+index 3821e55742f4..819a9a7bf786 100644
+--- a/arch/csky/kernel/setup.c
++++ b/arch/csky/kernel/setup.c
+@@ -24,26 +24,9 @@ struct screen_info screen_info = {
+ };
+ #endif
+ 
+-phys_addr_t __init_memblock memblock_end_of_REG0(void)
+-{
+-	return (memblock.memory.regions[0].base +
+-		memblock.memory.regions[0].size);
+-}
+-
+-phys_addr_t __init_memblock memblock_start_of_REG1(void)
+-{
+-	return memblock.memory.regions[1].base;
+-}
+-
+-size_t __init_memblock memblock_size_of_REG1(void)
+-{
+-	return memblock.memory.regions[1].size;
+-}
+-
+ static void __init csky_memblock_init(void)
+ {
+ 	unsigned long zone_size[MAX_NR_ZONES];
+-	unsigned long zhole_size[MAX_NR_ZONES];
+ 	signed long size;
+ 
+ 	memblock_reserve(__pa(_stext), _end - _stext);
+@@ -54,54 +37,36 @@ static void __init csky_memblock_init(void)
+ 	memblock_dump_all();
+ 
+ 	memset(zone_size, 0, sizeof(zone_size));
+-	memset(zhole_size, 0, sizeof(zhole_size));
+ 
+ 	min_low_pfn = PFN_UP(memblock_start_of_DRAM());
+-	max_pfn	    = PFN_DOWN(memblock_end_of_DRAM());
+-
+-	max_low_pfn = PFN_UP(memblock_end_of_REG0());
+-	if (max_low_pfn == 0)
+-		max_low_pfn = max_pfn;
++	max_low_pfn = max_pfn = PFN_DOWN(memblock_end_of_DRAM());
+ 
+ 	size = max_pfn - min_low_pfn;
+ 
+-	if (memblock.memory.cnt > 1) {
+-		zone_size[ZONE_NORMAL]  =
+-			PFN_DOWN(memblock_start_of_REG1()) - min_low_pfn;
+-		zhole_size[ZONE_NORMAL] =
+-			PFN_DOWN(memblock_start_of_REG1()) - max_low_pfn;
++	if (size <= PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET))
++		zone_size[ZONE_NORMAL] = size;
++	else if (size < PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET)) {
++		zone_size[ZONE_NORMAL] =
++				PFN_DOWN(SSEG_SIZE - PHYS_OFFSET_OFFSET);
++		max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL];
+ 	} else {
+-		if (size <= PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET))
+-			zone_size[ZONE_NORMAL] = max_pfn - min_low_pfn;
+-		else {
+-			zone_size[ZONE_NORMAL] =
++		zone_size[ZONE_NORMAL] =
+ 				PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
+-			max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL];
+-		}
++		max_low_pfn = min_low_pfn + zone_size[ZONE_NORMAL];
++		write_mmu_msa1(read_mmu_msa0() + SSEG_SIZE);
+ 	}
+ 
+ #ifdef CONFIG_HIGHMEM
+-	size = 0;
+-	if (memblock.memory.cnt > 1) {
+-		size = PFN_DOWN(memblock_size_of_REG1());
+-		highstart_pfn = PFN_DOWN(memblock_start_of_REG1());
+-	} else {
+-		size = max_pfn - min_low_pfn -
+-			PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
+-		highstart_pfn =  min_low_pfn +
+-			PFN_DOWN(LOWMEM_LIMIT - PHYS_OFFSET_OFFSET);
+-	}
+-
+-	if (size > 0)
+-		zone_size[ZONE_HIGHMEM] = size;
++	zone_size[ZONE_HIGHMEM] = max_pfn - max_low_pfn;
+ 
+-	highend_pfn = max_pfn;
++	highstart_pfn = max_low_pfn;
++	highend_pfn   = max_pfn;
+ #endif
+ 	memblock_set_current_limit(PFN_PHYS(max_low_pfn));
+ 
+ 	dma_contiguous_reserve(0);
+ 
+-	free_area_init_node(0, zone_size, min_low_pfn, zhole_size);
++	free_area_init_node(0, zone_size, min_low_pfn, NULL);
+ }
+ 
+ void __init setup_arch(char **cmdline_p)
+diff --git a/arch/csky/kernel/smp.c b/arch/csky/kernel/smp.c
+index 0bb0954d5570..b5c5bc3afeb5 100644
+--- a/arch/csky/kernel/smp.c
++++ b/arch/csky/kernel/smp.c
+@@ -22,6 +22,9 @@
+ #include <asm/sections.h>
+ #include <asm/mmu_context.h>
+ #include <asm/pgalloc.h>
++#ifdef CONFIG_CPU_HAS_FPU
++#include <abi/fpu.h>
++#endif
+ 
+ struct ipi_data_struct {
+ 	unsigned long bits ____cacheline_aligned;
+@@ -156,6 +159,8 @@ volatile unsigned int secondary_hint;
+ volatile unsigned int secondary_ccr;
+ volatile unsigned int secondary_stack;
+ 
++unsigned long secondary_msa1;
++
+ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+ {
+ 	unsigned long mask = 1 << cpu;
+@@ -164,6 +169,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
+ 		(unsigned int) task_stack_page(tidle) + THREAD_SIZE - 8;
+ 	secondary_hint = mfcr("cr31");
+ 	secondary_ccr  = mfcr("cr18");
++	secondary_msa1 = read_mmu_msa1();
+ 
+ 	/*
+ 	 * Because other CPUs are in reset status, we must flush data
+diff --git a/arch/csky/kernel/traps.c b/arch/csky/kernel/traps.c
+index b057480e7463..63715cb90ee9 100644
+--- a/arch/csky/kernel/traps.c
++++ b/arch/csky/kernel/traps.c
+@@ -115,8 +115,9 @@ asmlinkage void trap_c(struct pt_regs *regs)
+ 	int sig;
+ 	unsigned long vector;
+ 	siginfo_t info;
++	struct task_struct *tsk = current;
+ 
+-	vector = (mfcr("psr") >> 16) & 0xff;
++	vector = (regs->sr >> 16) & 0xff;
+ 
+ 	switch (vector) {
+ 	case VEC_ZERODIV:
+@@ -129,6 +130,7 @@ asmlinkage void trap_c(struct pt_regs *regs)
+ 		sig = SIGTRAP;
+ 		break;
+ 	case VEC_ILLEGAL:
++		tsk->thread.trap_no = vector;
+ 		die_if_kernel("Kernel mode ILLEGAL", regs, vector);
+ #ifndef CONFIG_CPU_NO_USER_BKPT
+ 		if (*(uint16_t *)instruction_pointer(regs) != USR_BKPT)
+@@ -146,16 +148,20 @@ asmlinkage void trap_c(struct pt_regs *regs)
+ 		sig = SIGTRAP;
+ 		break;
+ 	case VEC_ACCESS:
++		tsk->thread.trap_no = vector;
+ 		return buserr(regs);
+ #ifdef CONFIG_CPU_NEED_SOFTALIGN
+ 	case VEC_ALIGN:
++		tsk->thread.trap_no = vector;
+ 		return csky_alignment(regs);
+ #endif
+ #ifdef CONFIG_CPU_HAS_FPU
+ 	case VEC_FPE:
++		tsk->thread.trap_no = vector;
+ 		die_if_kernel("Kernel mode FPE", regs, vector);
+ 		return fpu_fpe(regs);
+ 	case VEC_PRIV:
++		tsk->thread.trap_no = vector;
+ 		die_if_kernel("Kernel mode PRIV", regs, vector);
+ 		if (fpu_libc_helper(regs))
+ 			return;
+@@ -164,5 +170,8 @@ asmlinkage void trap_c(struct pt_regs *regs)
+ 		sig = SIGSEGV;
+ 		break;
+ 	}
++
++	tsk->thread.trap_no = vector;
++
+ 	send_sig(sig, current, 0);
+ }
+diff --git a/arch/csky/mm/fault.c b/arch/csky/mm/fault.c
+index f76618b630f9..562c7f708749 100644
+--- a/arch/csky/mm/fault.c
++++ b/arch/csky/mm/fault.c
+@@ -179,11 +179,14 @@ bad_area:
+ bad_area_nosemaphore:
+ 	/* User mode accesses just cause a SIGSEGV */
+ 	if (user_mode(regs)) {
++		tsk->thread.trap_no = (regs->sr >> 16) & 0xff;
+ 		force_sig_fault(SIGSEGV, si_code, (void __user *)address);
+ 		return;
+ 	}
+ 
+ no_context:
++	tsk->thread.trap_no = (regs->sr >> 16) & 0xff;
++
+ 	/* Are we prepared to handle this kernel fault? */
+ 	if (fixup_exception(regs))
+ 		return;
+@@ -198,6 +201,8 @@ no_context:
+ 	die_if_kernel("Oops", regs, write);
+ 
+ out_of_memory:
++	tsk->thread.trap_no = (regs->sr >> 16) & 0xff;
++
+ 	/*
+ 	 * We ran out of memory, call the OOM killer, and return the userspace
+ 	 * (which will retry the fault, or kill us if we got oom-killed).
+@@ -206,6 +211,8 @@ out_of_memory:
+ 	return;
+ 
+ do_sigbus:
++	tsk->thread.trap_no = (regs->sr >> 16) & 0xff;
++
+ 	up_read(&mm->mmap_sem);
+ 
+ 	/* Kernel mode? Handle exceptions or die */
+diff --git a/arch/mips/boot/dts/ingenic/ci20.dts b/arch/mips/boot/dts/ingenic/ci20.dts
+index c340f947baa0..fc4e64200c3d 100644
+--- a/arch/mips/boot/dts/ingenic/ci20.dts
++++ b/arch/mips/boot/dts/ingenic/ci20.dts
+@@ -62,6 +62,11 @@
+ 		enable-active-high;
+ 	};
+ 
++	ir: ir {
++		compatible = "gpio-ir-receiver";
++		gpios = <&gpe 3 GPIO_ACTIVE_LOW>;
++	};
++
+ 	wlan0_power: fixedregulator@1 {
+ 		compatible = "regulator-fixed";
+ 		regulator-name = "wlan0_power";
+diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
+index 577345382b23..673f13b87db1 100644
+--- a/arch/powerpc/kernel/prom_init.c
++++ b/arch/powerpc/kernel/prom_init.c
+@@ -1773,6 +1773,9 @@ static void __init prom_rtas_os_term(char *str)
+ 	if (token == 0)
+ 		prom_panic("Could not get token for ibm,os-term\n");
+ 	os_term_args.token = cpu_to_be32(token);
++	os_term_args.nargs = cpu_to_be32(1);
++	os_term_args.nret = cpu_to_be32(1);
++	os_term_args.args[0] = cpu_to_be32(__pa(str));
+ 	prom_rtas_hcall((uint64_t)&os_term_args);
+ }
+ #endif /* CONFIG_PPC_SVM */
+diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
+index 2cefd071b848..c0c43a733830 100644
+--- a/arch/powerpc/kvm/book3s_hv.c
++++ b/arch/powerpc/kvm/book3s_hv.c
+@@ -3616,6 +3616,7 @@ int kvmhv_p9_guest_entry(struct kvm_vcpu *vcpu, u64 time_limit,
+ 		if (trap == BOOK3S_INTERRUPT_SYSCALL && !vcpu->arch.nested &&
+ 		    kvmppc_get_gpr(vcpu, 3) == H_CEDE) {
+ 			kvmppc_nested_cede(vcpu);
++			kvmppc_set_gpr(vcpu, 3, 0);
+ 			trap = 0;
+ 		}
+ 	} else {
+diff --git a/arch/powerpc/platforms/maple/setup.c b/arch/powerpc/platforms/maple/setup.c
+index 6f019df37916..15b2c6eb506d 100644
+--- a/arch/powerpc/platforms/maple/setup.c
++++ b/arch/powerpc/platforms/maple/setup.c
+@@ -291,23 +291,6 @@ static int __init maple_probe(void)
+ 	return 1;
+ }
+ 
+-define_machine(maple) {
+-	.name			= "Maple",
+-	.probe			= maple_probe,
+-	.setup_arch		= maple_setup_arch,
+-	.init_IRQ		= maple_init_IRQ,
+-	.pci_irq_fixup		= maple_pci_irq_fixup,
+-	.pci_get_legacy_ide_irq	= maple_pci_get_legacy_ide_irq,
+-	.restart		= maple_restart,
+-	.halt			= maple_halt,
+-       	.get_boot_time		= maple_get_boot_time,
+-       	.set_rtc_time		= maple_set_rtc_time,
+-       	.get_rtc_time		= maple_get_rtc_time,
+-      	.calibrate_decr		= generic_calibrate_decr,
+-	.progress		= maple_progress,
+-	.power_save		= power4_idle,
+-};
+-
+ #ifdef CONFIG_EDAC
+ /*
+  * Register a platform device for CPC925 memory controller on
+@@ -364,3 +347,20 @@ static int __init maple_cpc925_edac_setup(void)
+ }
+ machine_device_initcall(maple, maple_cpc925_edac_setup);
+ #endif
++
++define_machine(maple) {
++	.name			= "Maple",
++	.probe			= maple_probe,
++	.setup_arch		= maple_setup_arch,
++	.init_IRQ		= maple_init_IRQ,
++	.pci_irq_fixup		= maple_pci_irq_fixup,
++	.pci_get_legacy_ide_irq	= maple_pci_get_legacy_ide_irq,
++	.restart		= maple_restart,
++	.halt			= maple_halt,
++	.get_boot_time		= maple_get_boot_time,
++	.set_rtc_time		= maple_set_rtc_time,
++	.get_rtc_time		= maple_get_rtc_time,
++	.calibrate_decr		= generic_calibrate_decr,
++	.progress		= maple_progress,
++	.power_save		= power4_idle,
++};
+diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
+index 1c23d84a9097..73044634d342 100644
+--- a/arch/s390/crypto/aes_s390.c
++++ b/arch/s390/crypto/aes_s390.c
+@@ -342,6 +342,7 @@ static int cbc_aes_crypt(struct skcipher_request *req, unsigned long modifier)
+ 		memcpy(walk.iv, param.iv, AES_BLOCK_SIZE);
+ 		ret = skcipher_walk_done(&walk, nbytes - n);
+ 	}
++	memzero_explicit(&param, sizeof(param));
+ 	return ret;
+ }
+ 
+@@ -470,6 +471,8 @@ static int xts_aes_crypt(struct skcipher_request *req, unsigned long modifier)
+ 			 walk.dst.virt.addr, walk.src.virt.addr, n);
+ 		ret = skcipher_walk_done(&walk, nbytes - n);
+ 	}
++	memzero_explicit(&pcc_param, sizeof(pcc_param));
++	memzero_explicit(&xts_param, sizeof(xts_param));
+ 	return ret;
+ }
+ 
+diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
+index b095b1c78987..05b908b3a6b3 100644
+--- a/arch/s390/kernel/perf_cpum_sf.c
++++ b/arch/s390/kernel/perf_cpum_sf.c
+@@ -1576,6 +1576,7 @@ static void hw_collect_aux(struct cpu_hw_sf *cpuhw)
+ 	unsigned long range = 0, size;
+ 	unsigned long long overflow = 0;
+ 	struct perf_output_handle *handle = &cpuhw->handle;
++	unsigned long num_sdb;
+ 
+ 	aux = perf_get_aux(handle);
+ 	if (WARN_ON_ONCE(!aux))
+@@ -1587,13 +1588,14 @@ static void hw_collect_aux(struct cpu_hw_sf *cpuhw)
+ 			    size >> PAGE_SHIFT);
+ 	perf_aux_output_end(handle, size);
+ 
++	num_sdb = aux->sfb.num_sdb;
+ 	while (!done) {
+ 		/* Get an output handle */
+ 		aux = perf_aux_output_begin(handle, cpuhw->event);
+ 		if (handle->size == 0) {
+ 			pr_err("The AUX buffer with %lu pages for the "
+ 			       "diagnostic-sampling mode is full\n",
+-				aux->sfb.num_sdb);
++				num_sdb);
+ 			debug_sprintf_event(sfdbg, 1,
+ 					    "%s: AUX buffer used up\n",
+ 					    __func__);
+diff --git a/arch/s390/kernel/processor.c b/arch/s390/kernel/processor.c
+index 6ebc2117c66c..91b9b3f73de6 100644
+--- a/arch/s390/kernel/processor.c
++++ b/arch/s390/kernel/processor.c
+@@ -165,8 +165,9 @@ static void show_cpu_mhz(struct seq_file *m, unsigned long n)
+ static int show_cpuinfo(struct seq_file *m, void *v)
+ {
+ 	unsigned long n = (unsigned long) v - 1;
++	unsigned long first = cpumask_first(cpu_online_mask);
+ 
+-	if (!n)
++	if (n == first)
+ 		show_cpu_summary(m, v);
+ 	if (!machine_has_cpu_mhz)
+ 		return 0;
+@@ -179,6 +180,8 @@ static inline void *c_update(loff_t *pos)
+ {
+ 	if (*pos)
+ 		*pos = cpumask_next(*pos - 1, cpu_online_mask);
++	else
++		*pos = cpumask_first(cpu_online_mask);
+ 	return *pos < nr_cpu_ids ? (void *)*pos + 1 : NULL;
+ }
+ 
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 9d9ab77d02dd..364e3a89c096 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -1844,6 +1844,7 @@ int gmap_shadow_r3t(struct gmap *sg, unsigned long saddr, unsigned long r3t,
+ 		goto out_free;
+ 	} else if (*table & _REGION_ENTRY_ORIGIN) {
+ 		rc = -EAGAIN;		/* Race with shadow */
++		goto out_free;
+ 	}
+ 	crst_table_init(s_r3t, _REGION3_ENTRY_EMPTY);
+ 	/* mark as invalid as long as the parent table is not protected */
+diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
+index 247f95da057b..eca45ad2166c 100644
+--- a/arch/um/drivers/ubd_kern.c
++++ b/arch/um/drivers/ubd_kern.c
+@@ -1607,7 +1607,9 @@ int io_thread(void *arg)
+ 		written = 0;
+ 
+ 		do {
+-			res = os_write_file(kernel_fd, ((char *) io_req_buffer) + written, n);
++			res = os_write_file(kernel_fd,
++					    ((char *) io_req_buffer) + written,
++					    n - written);
+ 			if (res >= 0) {
+ 				written += res;
+ 			}
+diff --git a/arch/um/os-Linux/file.c b/arch/um/os-Linux/file.c
+index fbda10535dab..5c819f89b8c2 100644
+--- a/arch/um/os-Linux/file.c
++++ b/arch/um/os-Linux/file.c
+@@ -8,6 +8,7 @@
+ #include <errno.h>
+ #include <fcntl.h>
+ #include <signal.h>
++#include <linux/falloc.h>
+ #include <sys/ioctl.h>
+ #include <sys/mount.h>
+ #include <sys/socket.h>
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index b0da5320bcff..624f5d9b0f79 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -20,6 +20,7 @@
+ #include <linux/mm.h>
+ #include <linux/hyperv.h>
+ #include <linux/slab.h>
++#include <linux/kernel.h>
+ #include <linux/cpuhotplug.h>
+ #include <linux/syscore_ops.h>
+ #include <clocksource/hyperv_timer.h>
+@@ -419,11 +420,14 @@ void hyperv_cleanup(void)
+ }
+ EXPORT_SYMBOL_GPL(hyperv_cleanup);
+ 
+-void hyperv_report_panic(struct pt_regs *regs, long err)
++void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die)
+ {
+ 	static bool panic_reported;
+ 	u64 guest_id;
+ 
++	if (in_die && !panic_on_oops)
++		return;
++
+ 	/*
+ 	 * We prefer to report panic on 'die' chain as we have proper
+ 	 * registers to report, but if we miss it (e.g. on BUG()) we need
+diff --git a/arch/x86/kernel/acpi/cstate.c b/arch/x86/kernel/acpi/cstate.c
+index caf2edccbad2..49ae4e1ac9cd 100644
+--- a/arch/x86/kernel/acpi/cstate.c
++++ b/arch/x86/kernel/acpi/cstate.c
+@@ -161,7 +161,8 @@ int acpi_processor_ffh_cstate_probe(unsigned int cpu,
+ 
+ 	/* Make sure we are running on right CPU */
+ 
+-	retval = work_on_cpu(cpu, acpi_processor_ffh_cstate_probe_cpu, cx);
++	retval = call_on_cpu(cpu, acpi_processor_ffh_cstate_probe_cpu, cx,
++			     false);
+ 	if (retval == 0) {
+ 		/* Use the hint in CST */
+ 		percpu_entry->states[cx->index].eax = cx->address;
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index caa032ce3fe3..5e296a7e6036 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -263,6 +263,16 @@ static void __init ms_hyperv_init_platform(void)
+ 			cpuid_eax(HYPERV_CPUID_NESTED_FEATURES);
+ 	}
+ 
++	/*
++	 * Hyper-V expects to get crash register data or kmsg when
++	 * crash enlightment is available and system crashes. Set
++	 * crash_kexec_post_notifiers to be true to make sure that
++	 * calling crash enlightment interface before running kdump
++	 * kernel.
++	 */
++	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE)
++		crash_kexec_post_notifiers = true;
++
+ #ifdef CONFIG_X86_LOCAL_APIC
+ 	if (ms_hyperv.features & HV_X64_ACCESS_FREQUENCY_MSRS &&
+ 	    ms_hyperv.misc_features & HV_FEATURE_FREQUENCY_MSRS_AVAILABLE) {
+diff --git a/arch/x86/xen/xen-head.S b/arch/x86/xen/xen-head.S
+index 1d0cee3163e4..1e900676722b 100644
+--- a/arch/x86/xen/xen-head.S
++++ b/arch/x86/xen/xen-head.S
+@@ -35,7 +35,11 @@ SYM_CODE_START(startup_xen)
+ 	rep __ASM_SIZE(stos)
+ 
+ 	mov %_ASM_SI, xen_start_info
+-	mov $init_thread_union+THREAD_SIZE, %_ASM_SP
++#ifdef CONFIG_X86_64
++	mov initial_stack(%rip), %rsp
++#else
++	mov initial_stack, %esp
++#endif
+ 
+ #ifdef CONFIG_X86_64
+ 	/* Set up %gs.
+@@ -51,7 +55,7 @@ SYM_CODE_START(startup_xen)
+ 	wrmsr
+ #endif
+ 
+-	jmp xen_start_kernel
++	call xen_start_kernel
+ SYM_CODE_END(startup_xen)
+ 	__FINIT
+ #endif
+diff --git a/block/bfq-cgroup.c b/block/bfq-cgroup.c
+index 9d963ed518d1..68882b9b8f11 100644
+--- a/block/bfq-cgroup.c
++++ b/block/bfq-cgroup.c
+@@ -714,10 +714,7 @@ static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd,
+ 
+ 		if (entity->sched_data != &bfqg->sched_data) {
+ 			bic_set_bfqq(bic, NULL, 0);
+-			bfq_log_bfqq(bfqd, async_bfqq,
+-				     "bic_change_group: %p %d",
+-				     async_bfqq, async_bfqq->ref);
+-			bfq_put_queue(async_bfqq);
++			bfq_release_process_ref(bfqd, async_bfqq);
+ 		}
+ 	}
+ 
+@@ -818,39 +815,53 @@ static void bfq_flush_idle_tree(struct bfq_service_tree *st)
+ /**
+  * bfq_reparent_leaf_entity - move leaf entity to the root_group.
+  * @bfqd: the device data structure with the root group.
+- * @entity: the entity to move.
++ * @entity: the entity to move, if entity is a leaf; or the parent entity
++ *	    of an active leaf entity to move, if entity is not a leaf.
+  */
+ static void bfq_reparent_leaf_entity(struct bfq_data *bfqd,
+-				     struct bfq_entity *entity)
++				     struct bfq_entity *entity,
++				     int ioprio_class)
+ {
+-	struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity);
++	struct bfq_queue *bfqq;
++	struct bfq_entity *child_entity = entity;
++
++	while (child_entity->my_sched_data) { /* leaf not reached yet */
++		struct bfq_sched_data *child_sd = child_entity->my_sched_data;
++		struct bfq_service_tree *child_st = child_sd->service_tree +
++			ioprio_class;
++		struct rb_root *child_active = &child_st->active;
++
++		child_entity = bfq_entity_of(rb_first(child_active));
++
++		if (!child_entity)
++			child_entity = child_sd->in_service_entity;
++	}
+ 
++	bfqq = bfq_entity_to_bfqq(child_entity);
+ 	bfq_bfqq_move(bfqd, bfqq, bfqd->root_group);
+ }
+ 
+ /**
+- * bfq_reparent_active_entities - move to the root group all active
+- *                                entities.
++ * bfq_reparent_active_queues - move to the root group all active queues.
+  * @bfqd: the device data structure with the root group.
+  * @bfqg: the group to move from.
+- * @st: the service tree with the entities.
++ * @st: the service tree to start the search from.
+  */
+-static void bfq_reparent_active_entities(struct bfq_data *bfqd,
+-					 struct bfq_group *bfqg,
+-					 struct bfq_service_tree *st)
++static void bfq_reparent_active_queues(struct bfq_data *bfqd,
++				       struct bfq_group *bfqg,
++				       struct bfq_service_tree *st,
++				       int ioprio_class)
+ {
+ 	struct rb_root *active = &st->active;
+-	struct bfq_entity *entity = NULL;
+-
+-	if (!RB_EMPTY_ROOT(&st->active))
+-		entity = bfq_entity_of(rb_first(active));
++	struct bfq_entity *entity;
+ 
+-	for (; entity ; entity = bfq_entity_of(rb_first(active)))
+-		bfq_reparent_leaf_entity(bfqd, entity);
++	while ((entity = bfq_entity_of(rb_first(active))))
++		bfq_reparent_leaf_entity(bfqd, entity, ioprio_class);
+ 
+ 	if (bfqg->sched_data.in_service_entity)
+ 		bfq_reparent_leaf_entity(bfqd,
+-			bfqg->sched_data.in_service_entity);
++					 bfqg->sched_data.in_service_entity,
++					 ioprio_class);
+ }
+ 
+ /**
+@@ -882,13 +893,6 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ 	for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) {
+ 		st = bfqg->sched_data.service_tree + i;
+ 
+-		/*
+-		 * The idle tree may still contain bfq_queues belonging
+-		 * to exited task because they never migrated to a different
+-		 * cgroup from the one being destroyed now.
+-		 */
+-		bfq_flush_idle_tree(st);
+-
+ 		/*
+ 		 * It may happen that some queues are still active
+ 		 * (busy) upon group destruction (if the corresponding
+@@ -901,7 +905,20 @@ static void bfq_pd_offline(struct blkg_policy_data *pd)
+ 		 * There is no need to put the sync queues, as the
+ 		 * scheduler has taken no reference.
+ 		 */
+-		bfq_reparent_active_entities(bfqd, bfqg, st);
++		bfq_reparent_active_queues(bfqd, bfqg, st, i);
++
++		/*
++		 * The idle tree may still contain bfq_queues
++		 * belonging to exited tasks because they never
++		 * migrated to a different cgroup from the one being
++		 * destroyed now. In addition, even
++		 * bfq_reparent_active_queues() may happen to add some
++		 * entities to the idle tree. It happens if, in some
++		 * of the calls to bfq_bfqq_move() performed by
++		 * bfq_reparent_active_queues(), the queue to move is
++		 * empty and gets expired.
++		 */
++		bfq_flush_idle_tree(st);
+ 	}
+ 
+ 	__bfq_deactivate_entity(entity, false);
+diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
+index 4a44c7f19435..78ba57efd16b 100644
+--- a/block/bfq-iosched.c
++++ b/block/bfq-iosched.c
+@@ -2716,8 +2716,6 @@ static void bfq_bfqq_save_state(struct bfq_queue *bfqq)
+ 	}
+ }
+ 
+-
+-static
+ void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq)
+ {
+ 	/*
+diff --git a/block/bfq-iosched.h b/block/bfq-iosched.h
+index d1233af9c684..cd224aaf9f52 100644
+--- a/block/bfq-iosched.h
++++ b/block/bfq-iosched.h
+@@ -955,6 +955,7 @@ void bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq,
+ 		     bool compensate, enum bfqq_expiration reason);
+ void bfq_put_queue(struct bfq_queue *bfqq);
+ void bfq_end_wr_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
++void bfq_release_process_ref(struct bfq_data *bfqd, struct bfq_queue *bfqq);
+ void bfq_schedule_dispatch(struct bfq_data *bfqd);
+ void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg);
+ 
+diff --git a/drivers/acpi/acpica/acnamesp.h b/drivers/acpi/acpica/acnamesp.h
+index e618ddfab2fd..40f6a3c33a15 100644
+--- a/drivers/acpi/acpica/acnamesp.h
++++ b/drivers/acpi/acpica/acnamesp.h
+@@ -256,6 +256,8 @@ u32
+ acpi_ns_build_normalized_path(struct acpi_namespace_node *node,
+ 			      char *full_path, u32 path_size, u8 no_trailing);
+ 
++void acpi_ns_normalize_pathname(char *original_path);
++
+ char *acpi_ns_get_normalized_pathname(struct acpi_namespace_node *node,
+ 				      u8 no_trailing);
+ 
+diff --git a/drivers/acpi/acpica/dbinput.c b/drivers/acpi/acpica/dbinput.c
+index aa71f65395d2..ee6a1b77af3f 100644
+--- a/drivers/acpi/acpica/dbinput.c
++++ b/drivers/acpi/acpica/dbinput.c
+@@ -468,16 +468,14 @@ char *acpi_db_get_next_token(char *string,
+ 		return (NULL);
+ 	}
+ 
+-	/* Remove any spaces at the beginning */
++	/* Remove any spaces at the beginning, ignore blank lines */
+ 
+-	if (*string == ' ') {
+-		while (*string && (*string == ' ')) {
+-			string++;
+-		}
++	while (*string && isspace(*string)) {
++		string++;
++	}
+ 
+-		if (!(*string)) {
+-			return (NULL);
+-		}
++	if (!(*string)) {
++		return (NULL);
+ 	}
+ 
+ 	switch (*string) {
+@@ -570,7 +568,7 @@ char *acpi_db_get_next_token(char *string,
+ 
+ 		/* Find end of token */
+ 
+-		while (*string && (*string != ' ')) {
++		while (*string && !isspace(*string)) {
+ 			string++;
+ 		}
+ 		break;
+diff --git a/drivers/acpi/acpica/dswexec.c b/drivers/acpi/acpica/dswexec.c
+index 5e81a1ae44cf..1d4f8c81028c 100644
+--- a/drivers/acpi/acpica/dswexec.c
++++ b/drivers/acpi/acpica/dswexec.c
+@@ -16,6 +16,9 @@
+ #include "acinterp.h"
+ #include "acnamesp.h"
+ #include "acdebug.h"
++#ifdef ACPI_EXEC_APP
++#include "aecommon.h"
++#endif
+ 
+ #define _COMPONENT          ACPI_DISPATCHER
+ ACPI_MODULE_NAME("dswexec")
+@@ -329,6 +332,10 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
+ 	u32 op_class;
+ 	union acpi_parse_object *next_op;
+ 	union acpi_parse_object *first_arg;
++#ifdef ACPI_EXEC_APP
++	char *namepath;
++	union acpi_operand_object *obj_desc;
++#endif
+ 
+ 	ACPI_FUNCTION_TRACE_PTR(ds_exec_end_op, walk_state);
+ 
+@@ -537,6 +544,32 @@ acpi_status acpi_ds_exec_end_op(struct acpi_walk_state *walk_state)
+ 
+ 			status =
+ 			    acpi_ds_eval_buffer_field_operands(walk_state, op);
++			if (ACPI_FAILURE(status)) {
++				break;
++			}
++#ifdef ACPI_EXEC_APP
++			/*
++			 * acpi_exec support for namespace initialization file (initialize
++			 * buffer_fields in this code.)
++			 */
++			namepath =
++			    acpi_ns_get_external_pathname(op->common.node);
++			status = ae_lookup_init_file_entry(namepath, &obj_desc);
++			if (ACPI_SUCCESS(status)) {
++				status =
++				    acpi_ex_write_data_to_field(obj_desc,
++								op->common.
++								node->object,
++								NULL);
++				if ACPI_FAILURE
++					(status) {
++					ACPI_EXCEPTION((AE_INFO, status,
++							"While writing to buffer field"));
++					}
++			}
++			ACPI_FREE(namepath);
++			status = AE_OK;
++#endif
+ 			break;
+ 
+ 		case AML_TYPE_CREATE_OBJECT:
+diff --git a/drivers/acpi/acpica/dswload.c b/drivers/acpi/acpica/dswload.c
+index 697974e37edf..27069325b6de 100644
+--- a/drivers/acpi/acpica/dswload.c
++++ b/drivers/acpi/acpica/dswload.c
+@@ -14,7 +14,6 @@
+ #include "acdispat.h"
+ #include "acinterp.h"
+ #include "acnamesp.h"
+-
+ #ifdef ACPI_ASL_COMPILER
+ #include "acdisasm.h"
+ #endif
+@@ -399,7 +398,6 @@ acpi_status acpi_ds_load1_end_op(struct acpi_walk_state *walk_state)
+ 	union acpi_parse_object *op;
+ 	acpi_object_type object_type;
+ 	acpi_status status = AE_OK;
+-
+ #ifdef ACPI_ASL_COMPILER
+ 	u8 param_count;
+ #endif
+diff --git a/drivers/acpi/acpica/dswload2.c b/drivers/acpi/acpica/dswload2.c
+index b31457ca926c..edadbe146506 100644
+--- a/drivers/acpi/acpica/dswload2.c
++++ b/drivers/acpi/acpica/dswload2.c
+@@ -15,6 +15,9 @@
+ #include "acinterp.h"
+ #include "acnamesp.h"
+ #include "acevents.h"
++#ifdef ACPI_EXEC_APP
++#include "aecommon.h"
++#endif
+ 
+ #define _COMPONENT          ACPI_DISPATCHER
+ ACPI_MODULE_NAME("dswload2")
+@@ -373,6 +376,10 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
+ 	struct acpi_namespace_node *new_node;
+ 	u32 i;
+ 	u8 region_space;
++#ifdef ACPI_EXEC_APP
++	union acpi_operand_object *obj_desc;
++	char *namepath;
++#endif
+ 
+ 	ACPI_FUNCTION_TRACE(ds_load2_end_op);
+ 
+@@ -466,6 +473,11 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
+ 		 * be evaluated later during the execution phase
+ 		 */
+ 		status = acpi_ds_create_buffer_field(op, walk_state);
++		if (ACPI_FAILURE(status)) {
++			ACPI_EXCEPTION((AE_INFO, status,
++					"CreateBufferField failure"));
++			goto cleanup;
++			}
+ 		break;
+ 
+ 	case AML_TYPE_NAMED_FIELD:
+@@ -604,6 +616,29 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
+ 		case AML_NAME_OP:
+ 
+ 			status = acpi_ds_create_node(walk_state, node, op);
++			if (ACPI_FAILURE(status)) {
++				goto cleanup;
++			}
++#ifdef ACPI_EXEC_APP
++			/*
++			 * acpi_exec support for namespace initialization file (initialize
++			 * Name opcodes in this code.)
++			 */
++			namepath = acpi_ns_get_external_pathname(node);
++			status = ae_lookup_init_file_entry(namepath, &obj_desc);
++			if (ACPI_SUCCESS(status)) {
++
++				/* Detach any existing object, attach new object */
++
++				if (node->object) {
++					acpi_ns_detach_object(node);
++				}
++				acpi_ns_attach_object(node, obj_desc,
++						      obj_desc->common.type);
++			}
++			ACPI_FREE(namepath);
++			status = AE_OK;
++#endif
+ 			break;
+ 
+ 		case AML_METHOD_OP:
+diff --git a/drivers/acpi/acpica/nsnames.c b/drivers/acpi/acpica/nsnames.c
+index 370bbc867745..c717fff7d9b5 100644
+--- a/drivers/acpi/acpica/nsnames.c
++++ b/drivers/acpi/acpica/nsnames.c
+@@ -13,9 +13,6 @@
+ #define _COMPONENT          ACPI_NAMESPACE
+ ACPI_MODULE_NAME("nsnames")
+ 
+-/* Local Prototypes */
+-static void acpi_ns_normalize_pathname(char *original_path);
+-
+ /*******************************************************************************
+  *
+  * FUNCTION:    acpi_ns_get_external_pathname
+@@ -30,7 +27,6 @@ static void acpi_ns_normalize_pathname(char *original_path);
+  *              for error and debug statements.
+  *
+  ******************************************************************************/
+-
+ char *acpi_ns_get_external_pathname(struct acpi_namespace_node *node)
+ {
+ 	char *name_buffer;
+@@ -411,7 +407,7 @@ cleanup:
+  *
+  ******************************************************************************/
+ 
+-static void acpi_ns_normalize_pathname(char *original_path)
++void acpi_ns_normalize_pathname(char *original_path)
+ {
+ 	char *input_path = original_path;
+ 	char *new_path_buffer;
+diff --git a/drivers/acpi/acpica/utdelete.c b/drivers/acpi/acpica/utdelete.c
+index eee263cb7beb..c365faf4e6cd 100644
+--- a/drivers/acpi/acpica/utdelete.c
++++ b/drivers/acpi/acpica/utdelete.c
+@@ -452,13 +452,13 @@ acpi_ut_update_ref_count(union acpi_operand_object *object, u32 action)
+  *
+  * FUNCTION:    acpi_ut_update_object_reference
+  *
+- * PARAMETERS:  object              - Increment ref count for this object
+- *                                    and all sub-objects
++ * PARAMETERS:  object              - Increment or decrement the ref count for
++ *                                    this object and all sub-objects
+  *              action              - Either REF_INCREMENT or REF_DECREMENT
+  *
+  * RETURN:      Status
+  *
+- * DESCRIPTION: Increment the object reference count
++ * DESCRIPTION: Increment or decrement the object reference count
+  *
+  * Object references are incremented when:
+  * 1) An object is attached to a Node (namespace object)
+@@ -492,7 +492,7 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
+ 		}
+ 
+ 		/*
+-		 * All sub-objects must have their reference count incremented
++		 * All sub-objects must have their reference count updated
+ 		 * also. Different object types have different subobjects.
+ 		 */
+ 		switch (object->common.type) {
+@@ -559,6 +559,7 @@ acpi_ut_update_object_reference(union acpi_operand_object *object, u16 action)
+ 					break;
+ 				}
+ 			}
++
+ 			next_object = NULL;
+ 			break;
+ 
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index b64c62bfcea5..b2263ec67b43 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -1321,8 +1321,8 @@ int acpi_dev_pm_attach(struct device *dev, bool power_on)
+ 	 */
+ 	static const struct acpi_device_id special_pm_ids[] = {
+ 		{"PNP0C0B", }, /* Generic ACPI fan */
+-		{"INT1044", }, /* Fan for Tiger Lake generation */
+ 		{"INT3404", }, /* Fan */
++		{"INTC1044", }, /* Fan for Tiger Lake generation */
+ 		{}
+ 	};
+ 	struct acpi_device *adev = ACPI_COMPANION(dev);
+diff --git a/drivers/acpi/dptf/dptf_power.c b/drivers/acpi/dptf/dptf_power.c
+index 387f27ef3368..e4e8b75d39f0 100644
+--- a/drivers/acpi/dptf/dptf_power.c
++++ b/drivers/acpi/dptf/dptf_power.c
+@@ -97,8 +97,8 @@ static int dptf_power_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct acpi_device_id int3407_device_ids[] = {
+-	{"INT1047", 0},
+ 	{"INT3407", 0},
++	{"INTC1047", 0},
+ 	{"", 0},
+ };
+ MODULE_DEVICE_TABLE(acpi, int3407_device_ids);
+diff --git a/drivers/acpi/dptf/int340x_thermal.c b/drivers/acpi/dptf/int340x_thermal.c
+index 1ec7b6900662..bc71a6a60334 100644
+--- a/drivers/acpi/dptf/int340x_thermal.c
++++ b/drivers/acpi/dptf/int340x_thermal.c
+@@ -13,10 +13,6 @@
+ 
+ #define INT3401_DEVICE 0X01
+ static const struct acpi_device_id int340x_thermal_device_ids[] = {
+-	{"INT1040"},
+-	{"INT1043"},
+-	{"INT1044"},
+-	{"INT1047"},
+ 	{"INT3400"},
+ 	{"INT3401", INT3401_DEVICE},
+ 	{"INT3402"},
+@@ -28,6 +24,10 @@ static const struct acpi_device_id int340x_thermal_device_ids[] = {
+ 	{"INT3409"},
+ 	{"INT340A"},
+ 	{"INT340B"},
++	{"INTC1040"},
++	{"INTC1043"},
++	{"INTC1044"},
++	{"INTC1047"},
+ 	{""},
+ };
+ 
+diff --git a/drivers/acpi/processor_throttling.c b/drivers/acpi/processor_throttling.c
+index 532a1ae3595a..a0bd56ece3ff 100644
+--- a/drivers/acpi/processor_throttling.c
++++ b/drivers/acpi/processor_throttling.c
+@@ -897,13 +897,6 @@ static long __acpi_processor_get_throttling(void *data)
+ 	return pr->throttling.acpi_processor_get_throttling(pr);
+ }
+ 
+-static int call_on_cpu(int cpu, long (*fn)(void *), void *arg, bool direct)
+-{
+-	if (direct || (is_percpu_thread() && cpu == smp_processor_id()))
+-		return fn(arg);
+-	return work_on_cpu(cpu, fn, arg);
+-}
+-
+ static int acpi_processor_get_throttling(struct acpi_processor *pr)
+ {
+ 	if (!pr)
+diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
+index 6343402c09e6..27b80df49ba2 100644
+--- a/drivers/block/rbd.c
++++ b/drivers/block/rbd.c
+@@ -4554,6 +4554,10 @@ static void cancel_tasks_sync(struct rbd_device *rbd_dev)
+ 	cancel_work_sync(&rbd_dev->unlock_work);
+ }
+ 
++/*
++ * header_rwsem must not be held to avoid a deadlock with
++ * rbd_dev_refresh() when flushing notifies.
++ */
+ static void rbd_unregister_watch(struct rbd_device *rbd_dev)
+ {
+ 	cancel_tasks_sync(rbd_dev);
+@@ -6951,9 +6955,10 @@ static void rbd_print_dne(struct rbd_device *rbd_dev, bool is_snap)
+ 
+ static void rbd_dev_image_release(struct rbd_device *rbd_dev)
+ {
+-	rbd_dev_unprobe(rbd_dev);
+-	if (rbd_dev->opts)
++	if (!rbd_is_ro(rbd_dev))
+ 		rbd_unregister_watch(rbd_dev);
++
++	rbd_dev_unprobe(rbd_dev);
+ 	rbd_dev->image_format = 0;
+ 	kfree(rbd_dev->spec->image_id);
+ 	rbd_dev->spec->image_id = NULL;
+@@ -6964,6 +6969,9 @@ static void rbd_dev_image_release(struct rbd_device *rbd_dev)
+  * device.  If this image is the one being mapped (i.e., not a
+  * parent), initiate a watch on its header object before using that
+  * object to get detailed information about the rbd image.
++ *
++ * On success, returns with header_rwsem held for write if called
++ * with @depth == 0.
+  */
+ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ {
+@@ -6993,11 +7001,14 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ 		}
+ 	}
+ 
++	if (!depth)
++		down_write(&rbd_dev->header_rwsem);
++
+ 	ret = rbd_dev_header_info(rbd_dev);
+ 	if (ret) {
+ 		if (ret == -ENOENT && !need_watch)
+ 			rbd_print_dne(rbd_dev, false);
+-		goto err_out_watch;
++		goto err_out_probe;
+ 	}
+ 
+ 	/*
+@@ -7042,10 +7053,11 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+ 	return 0;
+ 
+ err_out_probe:
+-	rbd_dev_unprobe(rbd_dev);
+-err_out_watch:
++	if (!depth)
++		up_write(&rbd_dev->header_rwsem);
+ 	if (need_watch)
+ 		rbd_unregister_watch(rbd_dev);
++	rbd_dev_unprobe(rbd_dev);
+ err_out_format:
+ 	rbd_dev->image_format = 0;
+ 	kfree(rbd_dev->spec->image_id);
+@@ -7107,12 +7119,9 @@ static ssize_t do_rbd_add(struct bus_type *bus,
+ 		goto err_out_rbd_dev;
+ 	}
+ 
+-	down_write(&rbd_dev->header_rwsem);
+ 	rc = rbd_dev_image_probe(rbd_dev, 0);
+-	if (rc < 0) {
+-		up_write(&rbd_dev->header_rwsem);
++	if (rc < 0)
+ 		goto err_out_rbd_dev;
+-	}
+ 
+ 	if (rbd_dev->opts->alloc_size > rbd_dev->layout.object_size) {
+ 		rbd_warn(rbd_dev, "alloc_size adjusted to %u",
+diff --git a/drivers/clk/at91/clk-usb.c b/drivers/clk/at91/clk-usb.c
+index bda92980e015..c0895c993cce 100644
+--- a/drivers/clk/at91/clk-usb.c
++++ b/drivers/clk/at91/clk-usb.c
+@@ -75,6 +75,9 @@ static int at91sam9x5_clk_usb_determine_rate(struct clk_hw *hw,
+ 			tmp_parent_rate = req->rate * div;
+ 			tmp_parent_rate = clk_hw_round_rate(parent,
+ 							   tmp_parent_rate);
++			if (!tmp_parent_rate)
++				continue;
++
+ 			tmp_rate = DIV_ROUND_CLOSEST(tmp_parent_rate, div);
+ 			if (tmp_rate < req->rate)
+ 				tmp_diff = req->rate - tmp_rate;
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 95adf6c6db3d..305544b68b8a 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -2660,12 +2660,14 @@ static int clk_core_get_phase(struct clk_core *core)
+ {
+ 	int ret;
+ 
+-	clk_prepare_lock();
++	lockdep_assert_held(&prepare_lock);
++	if (!core->ops->get_phase)
++		return 0;
++
+ 	/* Always try to update cached phase if possible */
+-	if (core->ops->get_phase)
+-		core->phase = core->ops->get_phase(core->hw);
+-	ret = core->phase;
+-	clk_prepare_unlock();
++	ret = core->ops->get_phase(core->hw);
++	if (ret >= 0)
++		core->phase = ret;
+ 
+ 	return ret;
+ }
+@@ -2679,10 +2681,16 @@ static int clk_core_get_phase(struct clk_core *core)
+  */
+ int clk_get_phase(struct clk *clk)
+ {
++	int ret;
++
+ 	if (!clk)
+ 		return 0;
+ 
+-	return clk_core_get_phase(clk->core);
++	clk_prepare_lock();
++	ret = clk_core_get_phase(clk->core);
++	clk_prepare_unlock();
++
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(clk_get_phase);
+ 
+@@ -2896,13 +2904,21 @@ static struct hlist_head *orphan_list[] = {
+ static void clk_summary_show_one(struct seq_file *s, struct clk_core *c,
+ 				 int level)
+ {
+-	seq_printf(s, "%*s%-*s %7d %8d %8d %11lu %10lu %5d %6d\n",
++	int phase;
++
++	seq_printf(s, "%*s%-*s %7d %8d %8d %11lu %10lu ",
+ 		   level * 3 + 1, "",
+ 		   30 - level * 3, c->name,
+ 		   c->enable_count, c->prepare_count, c->protect_count,
+-		   clk_core_get_rate(c), clk_core_get_accuracy(c),
+-		   clk_core_get_phase(c),
+-		   clk_core_get_scaled_duty_cycle(c, 100000));
++		   clk_core_get_rate(c), clk_core_get_accuracy(c));
++
++	phase = clk_core_get_phase(c);
++	if (phase >= 0)
++		seq_printf(s, "%5d", phase);
++	else
++		seq_puts(s, "-----");
++
++	seq_printf(s, " %6d\n", clk_core_get_scaled_duty_cycle(c, 100000));
+ }
+ 
+ static void clk_summary_show_subtree(struct seq_file *s, struct clk_core *c,
+@@ -2939,6 +2955,7 @@ DEFINE_SHOW_ATTRIBUTE(clk_summary);
+ 
+ static void clk_dump_one(struct seq_file *s, struct clk_core *c, int level)
+ {
++	int phase;
+ 	unsigned long min_rate, max_rate;
+ 
+ 	clk_core_get_boundaries(c, &min_rate, &max_rate);
+@@ -2952,7 +2969,9 @@ static void clk_dump_one(struct seq_file *s, struct clk_core *c, int level)
+ 	seq_printf(s, "\"min_rate\": %lu,", min_rate);
+ 	seq_printf(s, "\"max_rate\": %lu,", max_rate);
+ 	seq_printf(s, "\"accuracy\": %lu,", clk_core_get_accuracy(c));
+-	seq_printf(s, "\"phase\": %d,", clk_core_get_phase(c));
++	phase = clk_core_get_phase(c);
++	if (phase >= 0)
++		seq_printf(s, "\"phase\": %d,", phase);
+ 	seq_printf(s, "\"duty_cycle\": %u",
+ 		   clk_core_get_scaled_duty_cycle(c, 100000));
+ }
+@@ -3434,14 +3453,11 @@ static int __clk_core_init(struct clk_core *core)
+ 		core->accuracy = 0;
+ 
+ 	/*
+-	 * Set clk's phase.
++	 * Set clk's phase by clk_core_get_phase() caching the phase.
+ 	 * Since a phase is by definition relative to its parent, just
+ 	 * query the current clock phase, or just assume it's in phase.
+ 	 */
+-	if (core->ops->get_phase)
+-		core->phase = core->ops->get_phase(core->hw);
+-	else
+-		core->phase = 0;
++	clk_core_get_phase(core);
+ 
+ 	/*
+ 	 * Set clk's duty cycle.
+diff --git a/drivers/clk/imx/clk-pll14xx.c b/drivers/clk/imx/clk-pll14xx.c
+index 5b0519a81a7a..37e311e1d058 100644
+--- a/drivers/clk/imx/clk-pll14xx.c
++++ b/drivers/clk/imx/clk-pll14xx.c
+@@ -55,8 +55,10 @@ static const struct imx_pll14xx_rate_table imx_pll1416x_tbl[] = {
+ };
+ 
+ static const struct imx_pll14xx_rate_table imx_pll1443x_tbl[] = {
++	PLL_1443X_RATE(1039500000U, 173, 2, 1, 16384),
+ 	PLL_1443X_RATE(650000000U, 325, 3, 2, 0),
+ 	PLL_1443X_RATE(594000000U, 198, 2, 2, 0),
++	PLL_1443X_RATE(519750000U, 173, 2, 2, 16384),
+ 	PLL_1443X_RATE(393216000U, 262, 2, 3, 9437),
+ 	PLL_1443X_RATE(361267200U, 361, 3, 3, 17511),
+ };
+diff --git a/drivers/clk/tegra/clk-tegra-pmc.c b/drivers/clk/tegra/clk-tegra-pmc.c
+index bec3e008335f..5e044ba1ae36 100644
+--- a/drivers/clk/tegra/clk-tegra-pmc.c
++++ b/drivers/clk/tegra/clk-tegra-pmc.c
+@@ -49,16 +49,16 @@ struct pmc_clk_init_data {
+ 
+ static DEFINE_SPINLOCK(clk_out_lock);
+ 
+-static const char *clk_out1_parents[] = { "clk_m", "clk_m_div2",
+-	"clk_m_div4", "extern1",
++static const char *clk_out1_parents[] = { "osc", "osc_div2",
++	"osc_div4", "extern1",
+ };
+ 
+-static const char *clk_out2_parents[] = { "clk_m", "clk_m_div2",
+-	"clk_m_div4", "extern2",
++static const char *clk_out2_parents[] = { "osc", "osc_div2",
++	"osc_div4", "extern2",
+ };
+ 
+-static const char *clk_out3_parents[] = { "clk_m", "clk_m_div2",
+-	"clk_m_div4", "extern3",
++static const char *clk_out3_parents[] = { "osc", "osc_div2",
++	"osc_div4", "extern3",
+ };
+ 
+ static struct pmc_clk_init_data pmc_clks[] = {
+diff --git a/drivers/crypto/qce/dma.c b/drivers/crypto/qce/dma.c
+index 7da893dc00e7..46db5bf366b4 100644
+--- a/drivers/crypto/qce/dma.c
++++ b/drivers/crypto/qce/dma.c
+@@ -48,9 +48,10 @@ void qce_dma_release(struct qce_dma_data *dma)
+ 
+ struct scatterlist *
+ qce_sgtable_add(struct sg_table *sgt, struct scatterlist *new_sgl,
+-		int max_ents)
++		unsigned int max_len)
+ {
+ 	struct scatterlist *sg = sgt->sgl, *sg_last = NULL;
++	unsigned int new_len;
+ 
+ 	while (sg) {
+ 		if (!sg_page(sg))
+@@ -61,13 +62,13 @@ qce_sgtable_add(struct sg_table *sgt, struct scatterlist *new_sgl,
+ 	if (!sg)
+ 		return ERR_PTR(-EINVAL);
+ 
+-	while (new_sgl && sg && max_ents) {
+-		sg_set_page(sg, sg_page(new_sgl), new_sgl->length,
+-			    new_sgl->offset);
++	while (new_sgl && sg && max_len) {
++		new_len = new_sgl->length > max_len ? max_len : new_sgl->length;
++		sg_set_page(sg, sg_page(new_sgl), new_len, new_sgl->offset);
+ 		sg_last = sg;
+ 		sg = sg_next(sg);
+ 		new_sgl = sg_next(new_sgl);
+-		max_ents--;
++		max_len -= new_len;
+ 	}
+ 
+ 	return sg_last;
+diff --git a/drivers/crypto/qce/dma.h b/drivers/crypto/qce/dma.h
+index ed25a0d9829e..786402169360 100644
+--- a/drivers/crypto/qce/dma.h
++++ b/drivers/crypto/qce/dma.h
+@@ -43,6 +43,6 @@ void qce_dma_issue_pending(struct qce_dma_data *dma);
+ int qce_dma_terminate_all(struct qce_dma_data *dma);
+ struct scatterlist *
+ qce_sgtable_add(struct sg_table *sgt, struct scatterlist *sg_add,
+-		int max_ents);
++		unsigned int max_len);
+ 
+ #endif /* _DMA_H_ */
+diff --git a/drivers/crypto/qce/skcipher.c b/drivers/crypto/qce/skcipher.c
+index 4217b745f124..63ae75809cb7 100644
+--- a/drivers/crypto/qce/skcipher.c
++++ b/drivers/crypto/qce/skcipher.c
+@@ -97,13 +97,14 @@ qce_skcipher_async_req_handle(struct crypto_async_request *async_req)
+ 
+ 	sg_init_one(&rctx->result_sg, qce->dma.result_buf, QCE_RESULT_BUF_SZ);
+ 
+-	sg = qce_sgtable_add(&rctx->dst_tbl, req->dst, rctx->dst_nents - 1);
++	sg = qce_sgtable_add(&rctx->dst_tbl, req->dst, req->cryptlen);
+ 	if (IS_ERR(sg)) {
+ 		ret = PTR_ERR(sg);
+ 		goto error_free;
+ 	}
+ 
+-	sg = qce_sgtable_add(&rctx->dst_tbl, &rctx->result_sg, 1);
++	sg = qce_sgtable_add(&rctx->dst_tbl, &rctx->result_sg,
++			     QCE_RESULT_BUF_SZ);
+ 	if (IS_ERR(sg)) {
+ 		ret = PTR_ERR(sg);
+ 		goto error_free;
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index ada69e722f84..f6f49f0f6fae 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -584,11 +584,11 @@ static void idxd_group_flags_setup(struct idxd_device *idxd)
+ 		struct idxd_group *group = &idxd->groups[i];
+ 
+ 		if (group->tc_a == -1)
+-			group->grpcfg.flags.tc_a = 0;
++			group->tc_a = group->grpcfg.flags.tc_a = 0;
+ 		else
+ 			group->grpcfg.flags.tc_a = group->tc_a;
+ 		if (group->tc_b == -1)
+-			group->grpcfg.flags.tc_b = 1;
++			group->tc_b = group->grpcfg.flags.tc_b = 1;
+ 		else
+ 			group->grpcfg.flags.tc_b = group->tc_b;
+ 		group->grpcfg.flags.use_token_limit = group->use_token_limit;
+diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_device.c b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+index 2a9e40131735..0d70cb2248fe 100644
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_device.c
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_device.c
+@@ -1104,9 +1104,9 @@ kfd_gtt_out:
+ 	return 0;
+ 
+ kfd_gtt_no_free_chunk:
+-	pr_debug("Allocation failed with mem_obj = %p\n", mem_obj);
++	pr_debug("Allocation failed with mem_obj = %p\n", *mem_obj);
+ 	mutex_unlock(&kfd->gtt_sa_lock);
+-	kfree(mem_obj);
++	kfree(*mem_obj);
+ 	return -ENOMEM;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index 0acd3409dd6c..3abeff7722e3 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -113,10 +113,13 @@ void hdcp_update_display(struct hdcp_workqueue *hdcp_work,
+ 
+ 		if (enable_encryption) {
+ 			display->adjust.disable = 0;
+-			if (content_type == DRM_MODE_HDCP_CONTENT_TYPE0)
++			if (content_type == DRM_MODE_HDCP_CONTENT_TYPE0) {
++				hdcp_w->link.adjust.hdcp1.disable = 0;
+ 				hdcp_w->link.adjust.hdcp2.force_type = MOD_HDCP_FORCE_TYPE_0;
+-			else if (content_type == DRM_MODE_HDCP_CONTENT_TYPE1)
++			} else if (content_type == DRM_MODE_HDCP_CONTENT_TYPE1) {
++				hdcp_w->link.adjust.hdcp1.disable = 1;
+ 				hdcp_w->link.adjust.hdcp2.force_type = MOD_HDCP_FORCE_TYPE_1;
++			}
+ 
+ 			schedule_delayed_work(&hdcp_w->property_validate_dwork,
+ 					      msecs_to_jiffies(DRM_HDCP_CHECK_PERIOD_MS));
+@@ -334,6 +337,7 @@ static void update_config(void *handle, struct cp_psp_stream_config *config)
+ 	link->dp.rev = aconnector->dc_link->dpcd_caps.dpcd_rev.raw;
+ 	display->adjust.disable = 1;
+ 	link->adjust.auth_delay = 2;
++	link->adjust.hdcp1.disable = 0;
+ 
+ 	hdcp_update_display(hdcp_work, link_index, aconnector, DRM_MODE_HDCP_CONTENT_TYPE0, false);
+ }
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drm.c b/drivers/gpu/drm/nouveau/nouveau_drm.c
+index b65ae817eabf..2d4c899e1f8b 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_drm.c
+@@ -618,6 +618,64 @@ nouveau_drm_device_fini(struct drm_device *dev)
+ 	kfree(drm);
+ }
+ 
++/*
++ * On some Intel PCIe bridge controllers doing a
++ * D0 -> D3hot -> D3cold -> D0 sequence causes Nvidia GPUs to not reappear.
++ * Skipping the intermediate D3hot step seems to make it work again. This is
++ * probably caused by not meeting the expectation the involved AML code has
++ * when the GPU is put into D3hot state before invoking it.
++ *
++ * This leads to various manifestations of this issue:
++ *  - AML code execution to power on the GPU hits an infinite loop (as the
++ *    code waits on device memory to change).
++ *  - kernel crashes, as all PCI reads return -1, which most code isn't able
++ *    to handle well enough.
++ *
++ * In all cases dmesg will contain at least one line like this:
++ * 'nouveau 0000:01:00.0: Refused to change power state, currently in D3'
++ * followed by a lot of nouveau timeouts.
++ *
++ * Deeper down, the \_SB.PCI0.PEG0.PG00._OFF code writes bit 0x80 to the
++ * undocumented PCI config space register 0x248 of the Intel PCIe bridge
++ * controller (0x1901) in order to change the state of the PCIe link between
++ * the PCIe port and the GPU. There are alternative code paths using other
++ * registers, which seem to work fine (executed pre Windows 8):
++ *  - 0xbc bit 0x20 (publicly available documentation claims 'reserved')
++ *  - 0xb0 bit 0x10 (link disable)
++ * Changing the conditions inside the firmware by poking into the relevant
++ * addresses does resolve the issue, but it seemed to be ACPI private memory
++ * and not any device accessible memory at all, so there is no portable way of
++ * changing the conditions.
++ * On an XPS 9560 that means bits [0,3] on \CPEX need to be cleared.
++ *
++ * The only systems where this behavior can be seen are hybrid graphics laptops
++ * with a secondary Nvidia Maxwell, Pascal or Turing GPU. It's unclear whether
++ * this issue only occurs in combination with listed Intel PCIe bridge
++ * controllers and the mentioned GPUs or other devices as well.
++ *
++ * Documentation on the PCIe bridge controller can be found in the
++ * "7th Generation Intel® Processor Families for H Platforms Datasheet Volume 2"
++ * Section "12 PCI Express* Controller (x16) Registers"
++ */
++
++static void quirk_broken_nv_runpm(struct pci_dev *pdev)
++{
++	struct drm_device *dev = pci_get_drvdata(pdev);
++	struct nouveau_drm *drm = nouveau_drm(dev);
++	struct pci_dev *bridge = pci_upstream_bridge(pdev);
++
++	if (!bridge || bridge->vendor != PCI_VENDOR_ID_INTEL)
++		return;
++
++	switch (bridge->device) {
++	case 0x1901:
++		drm->old_pm_cap = pdev->pm_cap;
++		pdev->pm_cap = 0;
++		NV_INFO(drm, "Disabling PCI power management to avoid bug\n");
++		break;
++	}
++}
++
+ static int nouveau_drm_probe(struct pci_dev *pdev,
+ 			     const struct pci_device_id *pent)
+ {
+@@ -699,6 +757,7 @@ static int nouveau_drm_probe(struct pci_dev *pdev,
+ 	if (ret)
+ 		goto fail_drm_dev_init;
+ 
++	quirk_broken_nv_runpm(pdev);
+ 	return 0;
+ 
+ fail_drm_dev_init:
+@@ -734,7 +793,11 @@ static void
+ nouveau_drm_remove(struct pci_dev *pdev)
+ {
+ 	struct drm_device *dev = pci_get_drvdata(pdev);
++	struct nouveau_drm *drm = nouveau_drm(dev);
+ 
++	/* revert our workaround */
++	if (drm->old_pm_cap)
++		pdev->pm_cap = drm->old_pm_cap;
+ 	nouveau_drm_device_remove(dev);
+ 	pci_disable_device(pdev);
+ }
+diff --git a/drivers/gpu/drm/nouveau/nouveau_drv.h b/drivers/gpu/drm/nouveau/nouveau_drv.h
+index c2c332fbde97..2a6519737800 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_drv.h
++++ b/drivers/gpu/drm/nouveau/nouveau_drv.h
+@@ -140,6 +140,8 @@ struct nouveau_drm {
+ 
+ 	struct list_head clients;
+ 
++	u8 old_pm_cap;
++
+ 	struct {
+ 		struct agp_bridge_data *bridge;
+ 		u32 base;
+diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
+index df9bf1fd1bc0..c567526b75b8 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
++++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
+@@ -171,6 +171,11 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
+ 	mm = get_task_mm(current);
+ 	down_read(&mm->mmap_sem);
+ 
++	if (!cli->svm.svmm) {
++		up_read(&mm->mmap_sem);
++		return -EINVAL;
++	}
++
+ 	for (addr = args->va_start, end = args->va_start + size; addr < end;) {
+ 		struct vm_area_struct *vma;
+ 		unsigned long next;
+@@ -179,6 +184,7 @@ nouveau_svmm_bind(struct drm_device *dev, void *data,
+ 		if (!vma)
+ 			break;
+ 
++		addr = max(addr, vma->vm_start);
+ 		next = min(vma->vm_end, end);
+ 		/* This is a best effort so we ignore errors */
+ 		nouveau_dmem_migrate_vma(cli->drm, vma, addr, next);
+diff --git a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+index dd8f85b8b3a7..f2f5636efac4 100644
+--- a/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
++++ b/drivers/gpu/drm/nouveau/nvkm/engine/gr/gf100.c
+@@ -1981,8 +1981,34 @@ gf100_gr_init_(struct nvkm_gr *base)
+ {
+ 	struct gf100_gr *gr = gf100_gr(base);
+ 	struct nvkm_subdev *subdev = &base->engine.subdev;
++	struct nvkm_device *device = subdev->device;
++	bool reset = device->chipset == 0x137 || device->chipset == 0x138;
+ 	u32 ret;
+ 
++	/* On certain GP107/GP108 boards, we trigger a weird issue where
++	 * GR will stop responding to PRI accesses after we've asked the
++	 * SEC2 RTOS to boot the GR falcons.  This happens with far more
++	 * frequency when cold-booting a board (ie. returning from D3).
++	 *
++	 * The root cause for this is not known and has proven difficult
++	 * to isolate, with many avenues being dead-ends.
++	 *
++	 * A workaround was discovered by Karol, whereby putting GR into
++	 * reset for an extended period right before initialisation
++	 * prevents the problem from occurring.
++	 *
++	 * XXX: As RM does not require any such workaround, this is more
++	 *      of a hack than a true fix.
++	 */
++	reset = nvkm_boolopt(device->cfgopt, "NvGrResetWar", reset);
++	if (reset) {
++		nvkm_mask(device, 0x000200, 0x00001000, 0x00000000);
++		nvkm_rd32(device, 0x000200);
++		msleep(50);
++		nvkm_mask(device, 0x000200, 0x00001000, 0x00001000);
++		nvkm_rd32(device, 0x000200);
++	}
++
+ 	nvkm_pmu_pgob(gr->base.engine.subdev.device->pmu, false);
+ 
+ 	ret = nvkm_falcon_get(&gr->fecs.falcon, subdev);
+diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
+index 5df596fb0280..fe420ca454e0 100644
+--- a/drivers/gpu/drm/ttm/ttm_bo.c
++++ b/drivers/gpu/drm/ttm/ttm_bo.c
+@@ -498,8 +498,10 @@ static void ttm_bo_cleanup_refs_or_queue(struct ttm_buffer_object *bo)
+ 
+ 		dma_resv_unlock(bo->base.resv);
+ 	}
+-	if (bo->base.resv != &bo->base._resv)
++	if (bo->base.resv != &bo->base._resv) {
++		ttm_bo_flush_all_fences(bo);
+ 		dma_resv_unlock(&bo->base._resv);
++	}
+ 
+ error:
+ 	kref_get(&bo->list_kref);
+diff --git a/drivers/gpu/drm/vc4/vc4_hdmi.c b/drivers/gpu/drm/vc4/vc4_hdmi.c
+index cea18dc15f77..340719238753 100644
+--- a/drivers/gpu/drm/vc4/vc4_hdmi.c
++++ b/drivers/gpu/drm/vc4/vc4_hdmi.c
+@@ -681,11 +681,23 @@ static enum drm_mode_status
+ vc4_hdmi_encoder_mode_valid(struct drm_encoder *crtc,
+ 			    const struct drm_display_mode *mode)
+ {
+-	/* HSM clock must be 108% of the pixel clock.  Additionally,
+-	 * the AXI clock needs to be at least 25% of pixel clock, but
+-	 * HSM ends up being the limiting factor.
++	/*
++	 * As stated in RPi's vc4 firmware "HDMI state machine (HSM) clock must
++	 * be faster than pixel clock, infinitesimally faster, tested in
++	 * simulation. Otherwise, exact value is unimportant for HDMI
++	 * operation." This conflicts with bcm2835's vc4 documentation, which
++	 * states HSM's clock has to be at least 108% of the pixel clock.
++	 *
++	 * Real life tests reveal that vc4's firmware statement holds up, and
++	 * users are able to use pixel clocks closer to HSM's, namely for
++	 * 1920x1200@60Hz. So it was decided to leave a 1% margin between
++	 * both clocks, which for RPi0-3 implies a maximum pixel clock of
++	 * 162MHz.
++	 *
++	 * Additionally, the AXI clock needs to be at least 25% of
++	 * pixel clock, but HSM ends up being the limiting factor.
+ 	 */
+-	if (mode->clock > HSM_CLOCK_FREQ / (1000 * 108 / 100))
++	if (mode->clock > HSM_CLOCK_FREQ / (1000 * 101 / 100))
+ 		return MODE_CLOCK_HIGH;
+ 
+ 	return MODE_OK;
+diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
+index 0370364169c4..501c43c5851d 100644
+--- a/drivers/hv/channel_mgmt.c
++++ b/drivers/hv/channel_mgmt.c
+@@ -839,6 +839,9 @@ void vmbus_initiate_unload(bool crash)
+ {
+ 	struct vmbus_channel_message_header hdr;
+ 
++	if (xchg(&vmbus_connection.conn_state, DISCONNECTED) == DISCONNECTED)
++		return;
++
+ 	/* Pre-Win2012R2 hosts don't support reconnect */
+ 	if (vmbus_proto_version < VERSION_WIN8_1)
+ 		return;
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index 029378c27421..a68bce4d0ddb 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -31,6 +31,7 @@
+ #include <linux/kdebug.h>
+ #include <linux/efi.h>
+ #include <linux/random.h>
++#include <linux/kernel.h>
+ #include <linux/syscore_ops.h>
+ #include <clocksource/hyperv_timer.h>
+ #include "hyperv_vmbus.h"
+@@ -48,14 +49,35 @@ static int hyperv_cpuhp_online;
+ 
+ static void *hv_panic_page;
+ 
++/*
++ * Boolean to control whether to report panic messages over Hyper-V.
++ *
++ * It can be set via /proc/sys/kernel/hyperv/record_panic_msg
++ */
++static int sysctl_record_panic_msg = 1;
++
++static int hyperv_report_reg(void)
++{
++	return !sysctl_record_panic_msg || !hv_panic_page;
++}
++
+ static int hyperv_panic_event(struct notifier_block *nb, unsigned long val,
+ 			      void *args)
+ {
+ 	struct pt_regs *regs;
+ 
+-	regs = current_pt_regs();
++	vmbus_initiate_unload(true);
+ 
+-	hyperv_report_panic(regs, val);
++	/*
++	 * Hyper-V should be notified only once about a panic.  If we will be
++	 * doing hyperv_report_panic_msg() later with kmsg data, don't do
++	 * the notification here.
++	 */
++	if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE
++	    && hyperv_report_reg()) {
++		regs = current_pt_regs();
++		hyperv_report_panic(regs, val, false);
++	}
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -65,7 +87,13 @@ static int hyperv_die_event(struct notifier_block *nb, unsigned long val,
+ 	struct die_args *die = (struct die_args *)args;
+ 	struct pt_regs *regs = die->regs;
+ 
+-	hyperv_report_panic(regs, val);
++	/*
++	 * Hyper-V should be notified only once about a panic.  If we will be
++	 * doing hyperv_report_panic_msg() later with kmsg data, don't do
++	 * the notification here.
++	 */
++	if (hyperv_report_reg())
++		hyperv_report_panic(regs, val, true);
+ 	return NOTIFY_DONE;
+ }
+ 
+@@ -1252,13 +1280,6 @@ static void vmbus_isr(void)
+ 	add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0);
+ }
+ 
+-/*
+- * Boolean to control whether to report panic messages over Hyper-V.
+- *
+- * It can be set via /proc/sys/kernel/hyperv/record_panic_msg
+- */
+-static int sysctl_record_panic_msg = 1;
+-
+ /*
+  * Callback from kmsg_dump. Grab as much as possible from the end of the kmsg
+  * buffer and call into Hyper-V to transfer the data.
+@@ -1382,19 +1403,29 @@ static int vmbus_bus_init(void)
+ 			hv_panic_page = (void *)hv_alloc_hyperv_zeroed_page();
+ 			if (hv_panic_page) {
+ 				ret = kmsg_dump_register(&hv_kmsg_dumper);
+-				if (ret)
++				if (ret) {
+ 					pr_err("Hyper-V: kmsg dump register "
+ 						"error 0x%x\n", ret);
++					hv_free_hyperv_page(
++					    (unsigned long)hv_panic_page);
++					hv_panic_page = NULL;
++				}
+ 			} else
+ 				pr_err("Hyper-V: panic message page memory "
+ 					"allocation failed");
+ 		}
+ 
+ 		register_die_notifier(&hyperv_die_block);
+-		atomic_notifier_chain_register(&panic_notifier_list,
+-					       &hyperv_panic_block);
+ 	}
+ 
++	/*
++	 * Always register the panic notifier because we need to unload
++	 * the VMbus channel connection to prevent any VMbus
++	 * activity after the VM panics.
++	 */
++	atomic_notifier_chain_register(&panic_notifier_list,
++			       &hyperv_panic_block);
++
+ 	vmbus_request_offers();
+ 
+ 	return 0;
+@@ -1407,7 +1438,6 @@ err_alloc:
+ 	hv_remove_vmbus_irq();
+ 
+ 	bus_unregister(&hv_bus);
+-	hv_free_hyperv_page((unsigned long)hv_panic_page);
+ 	unregister_sysctl_table(hv_ctl_table_hdr);
+ 	hv_ctl_table_hdr = NULL;
+ 	return ret;
+@@ -2204,8 +2234,6 @@ static int vmbus_bus_suspend(struct device *dev)
+ 
+ 	vmbus_initiate_unload(false);
+ 
+-	vmbus_connection.conn_state = DISCONNECTED;
+-
+ 	/* Reset the event for the next resume. */
+ 	reinit_completion(&vmbus_connection.ready_for_resume_event);
+ 
+@@ -2289,7 +2317,6 @@ static void hv_kexec_handler(void)
+ {
+ 	hv_stimer_global_cleanup();
+ 	vmbus_initiate_unload(false);
+-	vmbus_connection.conn_state = DISCONNECTED;
+ 	/* Make sure conn_state is set as hv_synic_cleanup checks for it */
+ 	mb();
+ 	cpuhp_remove_state(hyperv_cpuhp_online);
+@@ -2306,7 +2333,6 @@ static void hv_crash_handler(struct pt_regs *regs)
+ 	 * doing the cleanup for current CPU only. This should be sufficient
+ 	 * for kdump.
+ 	 */
+-	vmbus_connection.conn_state = DISCONNECTED;
+ 	cpu = smp_processor_id();
+ 	hv_stimer_cleanup(cpu);
+ 	hv_synic_disable_regs(cpu);
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index e051edbc43c1..0e35ff06f9af 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -328,6 +328,8 @@ static struct st_sensors_platform_data *st_sensors_dev_probe(struct device *dev,
+ 		return NULL;
+ 
+ 	pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
++	if (!pdata)
++		return ERR_PTR(-ENOMEM);
+ 	if (!device_property_read_u32(dev, "st,drdy-int-pin", &val) && (val <= 2))
+ 		pdata->drdy_int_pin = (u8) val;
+ 	else
+@@ -371,6 +373,8 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
+ 
+ 	/* If OF/DT pdata exists, it will take precedence of anything else */
+ 	of_pdata = st_sensors_dev_probe(indio_dev->dev.parent, pdata);
++	if (IS_ERR(of_pdata))
++		return PTR_ERR(of_pdata);
+ 	if (of_pdata)
+ 		pdata = of_pdata;
+ 
+diff --git a/drivers/iio/light/si1133.c b/drivers/iio/light/si1133.c
+index 015a21f0c2ef..9174ab928880 100644
+--- a/drivers/iio/light/si1133.c
++++ b/drivers/iio/light/si1133.c
+@@ -102,6 +102,9 @@
+ #define SI1133_INPUT_FRACTION_LOW	15
+ #define SI1133_LUX_OUTPUT_FRACTION	12
+ #define SI1133_LUX_BUFFER_SIZE		9
++#define SI1133_MEASURE_BUFFER_SIZE	3
++
++#define SI1133_SIGN_BIT_INDEX 23
+ 
+ static const int si1133_scale_available[] = {
+ 	1, 2, 4, 8, 16, 32, 64, 128};
+@@ -234,13 +237,13 @@ static const struct si1133_lux_coeff lux_coeff = {
+ 	}
+ };
+ 
+-static int si1133_calculate_polynomial_inner(u32 input, u8 fraction, u16 mag,
++static int si1133_calculate_polynomial_inner(s32 input, u8 fraction, u16 mag,
+ 					     s8 shift)
+ {
+ 	return ((input << fraction) / mag) << shift;
+ }
+ 
+-static int si1133_calculate_output(u32 x, u32 y, u8 x_order, u8 y_order,
++static int si1133_calculate_output(s32 x, s32 y, u8 x_order, u8 y_order,
+ 				   u8 input_fraction, s8 sign,
+ 				   const struct si1133_coeff *coeffs)
+ {
+@@ -276,7 +279,7 @@ static int si1133_calculate_output(u32 x, u32 y, u8 x_order, u8 y_order,
+  * The algorithm is from:
+  * https://siliconlabs.github.io/Gecko_SDK_Doc/efm32zg/html/si1133_8c_source.html#l00716
+  */
+-static int si1133_calc_polynomial(u32 x, u32 y, u8 input_fraction, u8 num_coeff,
++static int si1133_calc_polynomial(s32 x, s32 y, u8 input_fraction, u8 num_coeff,
+ 				  const struct si1133_coeff *coeffs)
+ {
+ 	u8 x_order, y_order;
+@@ -614,7 +617,7 @@ static int si1133_measure(struct si1133_data *data,
+ {
+ 	int err;
+ 
+-	__be16 resp;
++	u8 buffer[SI1133_MEASURE_BUFFER_SIZE];
+ 
+ 	err = si1133_set_adcmux(data, 0, chan->channel);
+ 	if (err)
+@@ -625,12 +628,13 @@ static int si1133_measure(struct si1133_data *data,
+ 	if (err)
+ 		return err;
+ 
+-	err = si1133_bulk_read(data, SI1133_REG_HOSTOUT(0), sizeof(resp),
+-			       (u8 *)&resp);
++	err = si1133_bulk_read(data, SI1133_REG_HOSTOUT(0), sizeof(buffer),
++			       buffer);
+ 	if (err)
+ 		return err;
+ 
+-	*val = be16_to_cpu(resp);
++	*val = sign_extend32((buffer[0] << 16) | (buffer[1] << 8) | buffer[2],
++			     SI1133_SIGN_BIT_INDEX);
+ 
+ 	return err;
+ }
+@@ -704,9 +708,9 @@ static int si1133_get_lux(struct si1133_data *data, int *val)
+ {
+ 	int err;
+ 	int lux;
+-	u32 high_vis;
+-	u32 low_vis;
+-	u32 ir;
++	s32 high_vis;
++	s32 low_vis;
++	s32 ir;
+ 	u8 buffer[SI1133_LUX_BUFFER_SIZE];
+ 
+ 	/* Activate lux channels */
+@@ -719,9 +723,16 @@ static int si1133_get_lux(struct si1133_data *data, int *val)
+ 	if (err)
+ 		return err;
+ 
+-	high_vis = (buffer[0] << 16) | (buffer[1] << 8) | buffer[2];
+-	low_vis = (buffer[3] << 16) | (buffer[4] << 8) | buffer[5];
+-	ir = (buffer[6] << 16) | (buffer[7] << 8) | buffer[8];
++	high_vis =
++		sign_extend32((buffer[0] << 16) | (buffer[1] << 8) | buffer[2],
++			      SI1133_SIGN_BIT_INDEX);
++
++	low_vis =
++		sign_extend32((buffer[3] << 16) | (buffer[4] << 8) | buffer[5],
++			      SI1133_SIGN_BIT_INDEX);
++
++	ir = sign_extend32((buffer[6] << 16) | (buffer[7] << 8) | buffer[8],
++			   SI1133_SIGN_BIT_INDEX);
+ 
+ 	if (high_vis > SI1133_ADC_THRESHOLD || ir > SI1133_ADC_THRESHOLD)
+ 		lux = si1133_calc_polynomial(high_vis, ir,
+diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
+index d2fade984999..25149544d57c 100644
+--- a/drivers/iommu/Kconfig
++++ b/drivers/iommu/Kconfig
+@@ -188,6 +188,7 @@ config INTEL_IOMMU
+ 	select NEED_DMA_MAP_STATE
+ 	select DMAR_TABLE
+ 	select SWIOTLB
++	select IOASID
+ 	help
+ 	  DMA remapping (DMAR) devices support enables independent address
+ 	  translations for Direct Memory Access (DMA) from devices.
+diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
+index f8d01d6b00da..ca8c4522045b 100644
+--- a/drivers/iommu/amd_iommu_types.h
++++ b/drivers/iommu/amd_iommu_types.h
+@@ -348,7 +348,7 @@
+ 
+ #define DTE_GCR3_VAL_A(x)	(((x) >> 12) & 0x00007ULL)
+ #define DTE_GCR3_VAL_B(x)	(((x) >> 15) & 0x0ffffULL)
+-#define DTE_GCR3_VAL_C(x)	(((x) >> 31) & 0xfffffULL)
++#define DTE_GCR3_VAL_C(x)	(((x) >> 31) & 0x1fffffULL)
+ 
+ #define DTE_GCR3_INDEX_A	0
+ #define DTE_GCR3_INDEX_B	1
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index 4be549478691..ef0a5246700e 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -4501,7 +4501,8 @@ static struct dmar_atsr_unit *dmar_find_atsr(struct acpi_dmar_atsr *atsr)
+ 	struct dmar_atsr_unit *atsru;
+ 	struct acpi_dmar_atsr *tmp;
+ 
+-	list_for_each_entry_rcu(atsru, &dmar_atsr_units, list) {
++	list_for_each_entry_rcu(atsru, &dmar_atsr_units, list,
++				dmar_rcu_check()) {
+ 		tmp = (struct acpi_dmar_atsr *)atsru->hdr;
+ 		if (atsr->segment != tmp->segment)
+ 			continue;
+diff --git a/drivers/iommu/intel-svm.c b/drivers/iommu/intel-svm.c
+index d7f2a5358900..2998418f0a38 100644
+--- a/drivers/iommu/intel-svm.c
++++ b/drivers/iommu/intel-svm.c
+@@ -531,7 +531,7 @@ struct page_req_dsc {
+ 	u64 priv_data[2];
+ };
+ 
+-#define PRQ_RING_MASK ((0x1000 << PRQ_ORDER) - 0x10)
++#define PRQ_RING_MASK	((0x1000 << PRQ_ORDER) - 0x20)
+ 
+ static bool access_error(struct vm_area_struct *vma, struct page_req_dsc *req)
+ {
+@@ -611,14 +611,15 @@ static irqreturn_t prq_event_thread(int irq, void *d)
+ 		 * any faults on kernel addresses. */
+ 		if (!svm->mm)
+ 			goto bad_req;
+-		/* If the mm is already defunct, don't handle faults. */
+-		if (!mmget_not_zero(svm->mm))
+-			goto bad_req;
+ 
+ 		/* If address is not canonical, return invalid response */
+ 		if (!is_canonical_address(address))
+ 			goto bad_req;
+ 
++		/* If the mm is already defunct, don't handle faults. */
++		if (!mmget_not_zero(svm->mm))
++			goto bad_req;
++
+ 		down_read(&svm->mm->mmap_sem);
+ 		vma = find_extend_vma(svm->mm, address);
+ 		if (!vma || address < vma->vm_start)
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index cce329d71fba..5eed75cd121f 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -613,18 +613,20 @@ static int viommu_domain_finalise(struct viommu_dev *viommu,
+ 	int ret;
+ 	struct viommu_domain *vdomain = to_viommu_domain(domain);
+ 
+-	vdomain->viommu		= viommu;
+-	vdomain->map_flags	= viommu->map_flags;
++	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
++			      viommu->last_domain, GFP_KERNEL);
++	if (ret < 0)
++		return ret;
++
++	vdomain->id		= (unsigned int)ret;
+ 
+ 	domain->pgsize_bitmap	= viommu->pgsize_bitmap;
+ 	domain->geometry	= viommu->geometry;
+ 
+-	ret = ida_alloc_range(&viommu->domain_ids, viommu->first_domain,
+-			      viommu->last_domain, GFP_KERNEL);
+-	if (ret >= 0)
+-		vdomain->id = (unsigned int)ret;
++	vdomain->map_flags	= viommu->map_flags;
++	vdomain->viommu		= viommu;
+ 
+-	return ret > 0 ? 0 : ret;
++	return 0;
+ }
+ 
+ static void viommu_domain_free(struct iommu_domain *domain)
+diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c
+index 6b566bba263b..ff7627b57772 100644
+--- a/drivers/irqchip/irq-mbigen.c
++++ b/drivers/irqchip/irq-mbigen.c
+@@ -220,10 +220,16 @@ static int mbigen_irq_domain_alloc(struct irq_domain *domain,
+ 	return 0;
+ }
+ 
++static void mbigen_irq_domain_free(struct irq_domain *domain, unsigned int virq,
++				   unsigned int nr_irqs)
++{
++	platform_msi_domain_free(domain, virq, nr_irqs);
++}
++
+ static const struct irq_domain_ops mbigen_domain_ops = {
+ 	.translate	= mbigen_domain_translate,
+ 	.alloc		= mbigen_irq_domain_alloc,
+-	.free		= irq_domain_free_irqs_common,
++	.free		= mbigen_irq_domain_free,
+ };
+ 
+ static int mbigen_of_create_domain(struct platform_device *pdev,
+diff --git a/drivers/leds/led-class.c b/drivers/leds/led-class.c
+index 1fc40e8af75e..3363a6551a70 100644
+--- a/drivers/leds/led-class.c
++++ b/drivers/leds/led-class.c
+@@ -376,7 +376,7 @@ int led_classdev_register_ext(struct device *parent,
+ 
+ 	if (ret)
+ 		dev_warn(parent, "Led %s renamed to %s due to name collision",
+-				led_cdev->name, dev_name(led_cdev->dev));
++				proposed_name, dev_name(led_cdev->dev));
+ 
+ 	if (led_cdev->flags & LED_BRIGHT_HW_CHANGED) {
+ 		ret = led_add_brightness_hw_changed(led_cdev);
+diff --git a/drivers/memory/tegra/tegra124-emc.c b/drivers/memory/tegra/tegra124-emc.c
+index 21f05240682b..33b8216bac30 100644
+--- a/drivers/memory/tegra/tegra124-emc.c
++++ b/drivers/memory/tegra/tegra124-emc.c
+@@ -1158,6 +1158,11 @@ static void emc_debugfs_init(struct device *dev, struct tegra_emc *emc)
+ 			emc->debugfs.max_rate = emc->timings[i].rate;
+ 	}
+ 
++	if (!emc->num_timings) {
++		emc->debugfs.min_rate = clk_get_rate(emc->clk);
++		emc->debugfs.max_rate = emc->debugfs.min_rate;
++	}
++
+ 	err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
+ 				 emc->debugfs.max_rate);
+ 	if (err < 0) {
+diff --git a/drivers/memory/tegra/tegra20-emc.c b/drivers/memory/tegra/tegra20-emc.c
+index 8ae474d9bfb9..b16715e9515d 100644
+--- a/drivers/memory/tegra/tegra20-emc.c
++++ b/drivers/memory/tegra/tegra20-emc.c
+@@ -628,6 +628,11 @@ static void tegra_emc_debugfs_init(struct tegra_emc *emc)
+ 			emc->debugfs.max_rate = emc->timings[i].rate;
+ 	}
+ 
++	if (!emc->num_timings) {
++		emc->debugfs.min_rate = clk_get_rate(emc->clk);
++		emc->debugfs.max_rate = emc->debugfs.min_rate;
++	}
++
+ 	err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
+ 				 emc->debugfs.max_rate);
+ 	if (err < 0) {
+diff --git a/drivers/memory/tegra/tegra30-emc.c b/drivers/memory/tegra/tegra30-emc.c
+index e3efd9529506..b42bdb667e85 100644
+--- a/drivers/memory/tegra/tegra30-emc.c
++++ b/drivers/memory/tegra/tegra30-emc.c
+@@ -1256,6 +1256,11 @@ static void tegra_emc_debugfs_init(struct tegra_emc *emc)
+ 			emc->debugfs.max_rate = emc->timings[i].rate;
+ 	}
+ 
++	if (!emc->num_timings) {
++		emc->debugfs.min_rate = clk_get_rate(emc->clk);
++		emc->debugfs.max_rate = emc->debugfs.min_rate;
++	}
++
+ 	err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
+ 				 emc->debugfs.max_rate);
+ 	if (err < 0) {
+diff --git a/drivers/mfd/cros_ec_dev.c b/drivers/mfd/cros_ec_dev.c
+index 39e611695053..32c2b912b58b 100644
+--- a/drivers/mfd/cros_ec_dev.c
++++ b/drivers/mfd/cros_ec_dev.c
+@@ -211,7 +211,7 @@ static int ec_device_probe(struct platform_device *pdev)
+ 	 * explicitly added on platforms that don't have the PD notifier ACPI
+ 	 * device entry defined.
+ 	 */
+-	if (IS_ENABLED(CONFIG_OF)) {
++	if (IS_ENABLED(CONFIG_OF) && ec->ec_dev->dev->of_node) {
+ 		if (cros_ec_check_features(ec, EC_FEATURE_USB_PD)) {
+ 			retval = mfd_add_hotplug_devices(ec->dev,
+ 					cros_usbpd_notify_cells,
+diff --git a/drivers/mtd/devices/phram.c b/drivers/mtd/devices/phram.c
+index 931e5c2481b5..b50ec7ecd10c 100644
+--- a/drivers/mtd/devices/phram.c
++++ b/drivers/mtd/devices/phram.c
+@@ -243,22 +243,25 @@ static int phram_setup(const char *val)
+ 
+ 	ret = parse_num64(&start, token[1]);
+ 	if (ret) {
+-		kfree(name);
+ 		parse_err("illegal start address\n");
++		goto error;
+ 	}
+ 
+ 	ret = parse_num64(&len, token[2]);
+ 	if (ret) {
+-		kfree(name);
+ 		parse_err("illegal device length\n");
++		goto error;
+ 	}
+ 
+ 	ret = register_device(name, start, len);
+-	if (!ret)
+-		pr_info("%s device: %#llx at %#llx\n", name, len, start);
+-	else
+-		kfree(name);
++	if (ret)
++		goto error;
++
++	pr_info("%s device: %#llx at %#llx\n", name, len, start);
++	return 0;
+ 
++error:
++	kfree(name);
+ 	return ret;
+ }
+ 
+diff --git a/drivers/mtd/lpddr/lpddr_cmds.c b/drivers/mtd/lpddr/lpddr_cmds.c
+index 1efc643c9871..9341a8a592e8 100644
+--- a/drivers/mtd/lpddr/lpddr_cmds.c
++++ b/drivers/mtd/lpddr/lpddr_cmds.c
+@@ -68,7 +68,6 @@ struct mtd_info *lpddr_cmdset(struct map_info *map)
+ 	shared = kmalloc_array(lpddr->numchips, sizeof(struct flchip_shared),
+ 						GFP_KERNEL);
+ 	if (!shared) {
+-		kfree(lpddr);
+ 		kfree(mtd);
+ 		return NULL;
+ 	}
+diff --git a/drivers/mtd/nand/raw/nand_base.c b/drivers/mtd/nand/raw/nand_base.c
+index f64e3b6605c6..47c63968fa45 100644
+--- a/drivers/mtd/nand/raw/nand_base.c
++++ b/drivers/mtd/nand/raw/nand_base.c
+@@ -5907,6 +5907,8 @@ void nand_cleanup(struct nand_chip *chip)
+ 	    chip->ecc.algo == NAND_ECC_BCH)
+ 		nand_bch_free((struct nand_bch_control *)chip->ecc.priv);
+ 
++	nanddev_cleanup(&chip->base);
++
+ 	/* Free bad block table memory */
+ 	kfree(chip->bbt);
+ 	kfree(chip->data_buf);
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 5750c45019d8..8dda51bbdd11 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -609,6 +609,7 @@ static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos)
+ 		.ooboffs = 0,
+ 		.ooblen = sizeof(marker),
+ 		.oobbuf.out = marker,
++		.mode = MTD_OPS_RAW,
+ 	};
+ 	int ret;
+ 
+diff --git a/drivers/net/dsa/bcm_sf2_cfp.c b/drivers/net/dsa/bcm_sf2_cfp.c
+index 1962c8330daa..f9785027c096 100644
+--- a/drivers/net/dsa/bcm_sf2_cfp.c
++++ b/drivers/net/dsa/bcm_sf2_cfp.c
+@@ -882,17 +882,14 @@ static int bcm_sf2_cfp_rule_set(struct dsa_switch *ds, int port,
+ 	     fs->m_ext.data[1]))
+ 		return -EINVAL;
+ 
+-	if (fs->location != RX_CLS_LOC_ANY && fs->location >= CFP_NUM_RULES)
++	if (fs->location != RX_CLS_LOC_ANY &&
++	    fs->location > bcm_sf2_cfp_rule_size(priv))
+ 		return -EINVAL;
+ 
+ 	if (fs->location != RX_CLS_LOC_ANY &&
+ 	    test_bit(fs->location, priv->cfp.used))
+ 		return -EBUSY;
+ 
+-	if (fs->location != RX_CLS_LOC_ANY &&
+-	    fs->location > bcm_sf2_cfp_rule_size(priv))
+-		return -EINVAL;
+-
+ 	ret = bcm_sf2_cfp_rule_cmp(priv, port, fs);
+ 	if (ret == 0)
+ 		return -EEXIST;
+@@ -973,7 +970,7 @@ static int bcm_sf2_cfp_rule_del(struct bcm_sf2_priv *priv, int port, u32 loc)
+ 	struct cfp_rule *rule;
+ 	int ret;
+ 
+-	if (loc >= CFP_NUM_RULES)
++	if (loc > bcm_sf2_cfp_rule_size(priv))
+ 		return -EINVAL;
+ 
+ 	/* Refuse deleting unused rules, and those that are not unique since
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+index d2cfa247abc8..9710cdecb63a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+@@ -1535,6 +1535,10 @@ static int mlx5e_set_fecparam(struct net_device *netdev,
+ 	int mode;
+ 	int err;
+ 
++	if (bitmap_weight((unsigned long *)&fecparam->fec,
++			  ETHTOOL_FEC_BASER_BIT + 1) > 1)
++		return -EOPNOTSUPP;
++
+ 	for (mode = 0; mode < ARRAY_SIZE(pplm_fec_2_ethtool); mode++) {
+ 		if (!(pplm_fec_2_ethtool[mode] & fecparam->fec))
+ 			continue;
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 061aada4748a..9b4ae5c36da6 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -2398,6 +2398,9 @@ static int macsec_upd_offload(struct sk_buff *skb, struct genl_info *info)
+ 		return PTR_ERR(dev);
+ 	macsec = macsec_priv(dev);
+ 
++	if (!tb_offload[MACSEC_OFFLOAD_ATTR_TYPE])
++		return -EINVAL;
++
+ 	offload = nla_get_u8(tb_offload[MACSEC_OFFLOAD_ATTR_TYPE]);
+ 	if (macsec->offload == offload)
+ 		return 0;
+diff --git a/drivers/nvdimm/bus.c b/drivers/nvdimm/bus.c
+index a8b515968569..09087c38fabd 100644
+--- a/drivers/nvdimm/bus.c
++++ b/drivers/nvdimm/bus.c
+@@ -1042,8 +1042,10 @@ static int __nd_ioctl(struct nvdimm_bus *nvdimm_bus, struct nvdimm *nvdimm,
+ 			return -EFAULT;
+ 	}
+ 
+-	if (!desc || (desc->out_num + desc->in_num == 0) ||
+-			!test_bit(cmd, &cmd_mask))
++	if (!desc ||
++	    (desc->out_num + desc->in_num == 0) ||
++	    cmd > ND_CMD_CALL ||
++	    !test_bit(cmd, &cmd_mask))
+ 		return -ENOTTY;
+ 
+ 	/* fail write commands (when read-only) */
+diff --git a/drivers/of/overlay.c b/drivers/of/overlay.c
+index c9219fddf44b..50bbe0edf538 100644
+--- a/drivers/of/overlay.c
++++ b/drivers/of/overlay.c
+@@ -261,6 +261,8 @@ static struct property *dup_and_fixup_symbol_prop(
+ 
+ 	of_property_set_flag(new_prop, OF_DYNAMIC);
+ 
++	kfree(target_path);
++
+ 	return new_prop;
+ 
+ err_free_new_prop:
+diff --git a/drivers/of/unittest.c b/drivers/of/unittest.c
+index 68b87587b2ef..7199aaafd304 100644
+--- a/drivers/of/unittest.c
++++ b/drivers/of/unittest.c
+@@ -777,6 +777,10 @@ static void __init of_unittest_changeset(void)
+ 	unittest(!of_changeset_revert(&chgset), "revert failed\n");
+ 
+ 	of_changeset_destroy(&chgset);
++
++	of_node_put(n1);
++	of_node_put(n2);
++	of_node_put(n21);
+ #endif
+ }
+ 
+@@ -1151,10 +1155,13 @@ static void __init of_unittest_platform_populate(void)
+ 
+ 	of_platform_populate(np, match, NULL, &test_bus->dev);
+ 	for_each_child_of_node(np, child) {
+-		for_each_child_of_node(child, grandchild)
+-			unittest(of_find_device_by_node(grandchild),
++		for_each_child_of_node(child, grandchild) {
++			pdev = of_find_device_by_node(grandchild);
++			unittest(pdev,
+ 				 "Could not create device for node '%pOFn'\n",
+ 				 grandchild);
++			of_dev_put(pdev);
++		}
+ 	}
+ 
+ 	of_platform_depopulate(&test_bus->dev);
+@@ -2564,8 +2571,11 @@ static __init void of_unittest_overlay_high_level(void)
+ 				goto err_unlock;
+ 			}
+ 			if (__of_add_property(of_symbols, new_prop)) {
++				kfree(new_prop->name);
++				kfree(new_prop->value);
++				kfree(new_prop);
+ 				/* "name" auto-generated by unflatten */
+-				if (!strcmp(new_prop->name, "name"))
++				if (!strcmp(prop->name, "name"))
+ 					continue;
+ 				unittest(0, "duplicate property '%s' in overlay_base node __symbols__",
+ 					 prop->name);
+diff --git a/drivers/phy/socionext/phy-uniphier-usb3ss.c b/drivers/phy/socionext/phy-uniphier-usb3ss.c
+index ec231e40ef2a..a7577e316baf 100644
+--- a/drivers/phy/socionext/phy-uniphier-usb3ss.c
++++ b/drivers/phy/socionext/phy-uniphier-usb3ss.c
+@@ -314,6 +314,10 @@ static const struct of_device_id uniphier_u3ssphy_match[] = {
+ 		.compatible = "socionext,uniphier-pro4-usb3-ssphy",
+ 		.data = &uniphier_pro4_data,
+ 	},
++	{
++		.compatible = "socionext,uniphier-pro5-usb3-ssphy",
++		.data = &uniphier_pro4_data,
++	},
+ 	{
+ 		.compatible = "socionext,uniphier-pxs2-usb3-ssphy",
+ 		.data = &uniphier_pxs2_data,
+diff --git a/drivers/platform/chrome/cros_ec.c b/drivers/platform/chrome/cros_ec.c
+index 6fc8f2c3ac51..7ee43b2e0654 100644
+--- a/drivers/platform/chrome/cros_ec.c
++++ b/drivers/platform/chrome/cros_ec.c
+@@ -138,6 +138,24 @@ static int cros_ec_sleep_event(struct cros_ec_device *ec_dev, u8 sleep_event)
+ 	return ret;
+ }
+ 
++static int cros_ec_ready_event(struct notifier_block *nb,
++			       unsigned long queued_during_suspend,
++			       void *_notify)
++{
++	struct cros_ec_device *ec_dev = container_of(nb, struct cros_ec_device,
++						     notifier_ready);
++	u32 host_event = cros_ec_get_host_event(ec_dev);
++
++	if (host_event & EC_HOST_EVENT_MASK(EC_HOST_EVENT_INTERFACE_READY)) {
++		mutex_lock(&ec_dev->lock);
++		cros_ec_query_all(ec_dev);
++		mutex_unlock(&ec_dev->lock);
++		return NOTIFY_OK;
++	}
++
++	return NOTIFY_DONE;
++}
++
+ /**
+  * cros_ec_register() - Register a new ChromeOS EC, using the provided info.
+  * @ec_dev: Device to register.
+@@ -237,6 +255,18 @@ int cros_ec_register(struct cros_ec_device *ec_dev)
+ 		dev_dbg(ec_dev->dev, "Error %d clearing sleep event to ec",
+ 			err);
+ 
++	if (ec_dev->mkbp_event_supported) {
++		/*
++		 * Register the notifier for EC_HOST_EVENT_INTERFACE_READY
++		 * event.
++		 */
++		ec_dev->notifier_ready.notifier_call = cros_ec_ready_event;
++		err = blocking_notifier_chain_register(&ec_dev->event_notifier,
++						      &ec_dev->notifier_ready);
++		if (err)
++			return err;
++	}
++
+ 	dev_info(dev, "Chrome EC device registered\n");
+ 
+ 	return 0;
+diff --git a/drivers/platform/x86/intel-hid.c b/drivers/platform/x86/intel-hid.c
+index 43d590250228..9c0e6e0fabdf 100644
+--- a/drivers/platform/x86/intel-hid.c
++++ b/drivers/platform/x86/intel-hid.c
+@@ -19,8 +19,8 @@ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Alex Hung");
+ 
+ static const struct acpi_device_id intel_hid_ids[] = {
+-	{"INT1051", 0},
+ 	{"INT33D5", 0},
++	{"INTC1051", 0},
+ 	{"", 0},
+ };
+ 
+diff --git a/drivers/power/supply/axp288_fuel_gauge.c b/drivers/power/supply/axp288_fuel_gauge.c
+index e1bc4e6e6f30..f40fa0e63b6e 100644
+--- a/drivers/power/supply/axp288_fuel_gauge.c
++++ b/drivers/power/supply/axp288_fuel_gauge.c
+@@ -706,14 +706,14 @@ static const struct dmi_system_id axp288_fuel_gauge_blacklist[] = {
+ 	{
+ 		/* Intel Cherry Trail Compute Stick, Windows version */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "STK1AW32SC"),
+ 		},
+ 	},
+ 	{
+ 		/* Intel Cherry Trail Compute Stick, version without an OS */
+ 		.matches = {
+-			DMI_MATCH(DMI_SYS_VENDOR, "Intel Corporation"),
++			DMI_MATCH(DMI_SYS_VENDOR, "Intel"),
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "STK1A32SC"),
+ 		},
+ 	},
+diff --git a/drivers/power/supply/bq27xxx_battery.c b/drivers/power/supply/bq27xxx_battery.c
+index 195c18c2f426..664e50103eaa 100644
+--- a/drivers/power/supply/bq27xxx_battery.c
++++ b/drivers/power/supply/bq27xxx_battery.c
+@@ -1885,7 +1885,10 @@ int bq27xxx_battery_setup(struct bq27xxx_device_info *di)
+ 
+ 	di->bat = power_supply_register_no_ws(di->dev, psy_desc, &psy_cfg);
+ 	if (IS_ERR(di->bat)) {
+-		dev_err(di->dev, "failed to register battery\n");
++		if (PTR_ERR(di->bat) == -EPROBE_DEFER)
++			dev_dbg(di->dev, "failed to register battery, deferring probe\n");
++		else
++			dev_err(di->dev, "failed to register battery\n");
+ 		return PTR_ERR(di->bat);
+ 	}
+ 
+diff --git a/drivers/rtc/rtc-88pm860x.c b/drivers/rtc/rtc-88pm860x.c
+index 4743b16a8d84..1526402e126b 100644
+--- a/drivers/rtc/rtc-88pm860x.c
++++ b/drivers/rtc/rtc-88pm860x.c
+@@ -336,6 +336,10 @@ static int pm860x_rtc_probe(struct platform_device *pdev)
+ 	info->dev = &pdev->dev;
+ 	dev_set_drvdata(&pdev->dev, info);
+ 
++	info->rtc_dev = devm_rtc_allocate_device(&pdev->dev);
++	if (IS_ERR(info->rtc_dev))
++		return PTR_ERR(info->rtc_dev);
++
+ 	ret = devm_request_threaded_irq(&pdev->dev, info->irq, NULL,
+ 					rtc_update_handler, IRQF_ONESHOT, "rtc",
+ 					info);
+@@ -377,13 +381,11 @@ static int pm860x_rtc_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	info->rtc_dev = devm_rtc_device_register(&pdev->dev, "88pm860x-rtc",
+-					    &pm860x_rtc_ops, THIS_MODULE);
+-	ret = PTR_ERR(info->rtc_dev);
+-	if (IS_ERR(info->rtc_dev)) {
+-		dev_err(&pdev->dev, "Failed to register RTC device: %d\n", ret);
++	info->rtc_dev->ops = &pm860x_rtc_ops;
++
++	ret = rtc_register_device(info->rtc_dev);
++	if (ret)
+ 		return ret;
+-	}
+ 
+ 	/*
+ 	 * enable internal XO instead of internal 3.25MHz clock since it can
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 4e6af592f018..9c0ee192f0f9 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -793,8 +793,10 @@ sg_common_write(Sg_fd * sfp, Sg_request * srp,
+ 			"sg_common_write:  scsi opcode=0x%02x, cmd_size=%d\n",
+ 			(int) cmnd[0], (int) hp->cmd_len));
+ 
+-	if (hp->dxfer_len >= SZ_256M)
++	if (hp->dxfer_len >= SZ_256M) {
++		sg_remove_request(sfp, srp);
+ 		return -EINVAL;
++	}
+ 
+ 	k = sg_start_req(srp, cmnd);
+ 	if (k) {
+diff --git a/drivers/soc/imx/gpc.c b/drivers/soc/imx/gpc.c
+index 98b9d9a902ae..90a8b2c0676f 100644
+--- a/drivers/soc/imx/gpc.c
++++ b/drivers/soc/imx/gpc.c
+@@ -87,8 +87,8 @@ static int imx6_pm_domain_power_off(struct generic_pm_domain *genpd)
+ static int imx6_pm_domain_power_on(struct generic_pm_domain *genpd)
+ {
+ 	struct imx_pm_domain *pd = to_imx_pm_domain(genpd);
+-	int i, ret, sw, sw2iso;
+-	u32 val;
++	int i, ret;
++	u32 val, req;
+ 
+ 	if (pd->supply) {
+ 		ret = regulator_enable(pd->supply);
+@@ -107,17 +107,18 @@ static int imx6_pm_domain_power_on(struct generic_pm_domain *genpd)
+ 	regmap_update_bits(pd->regmap, pd->reg_offs + GPC_PGC_CTRL_OFFS,
+ 			   0x1, 0x1);
+ 
+-	/* Read ISO and ISO2SW power up delays */
+-	regmap_read(pd->regmap, pd->reg_offs + GPC_PGC_PUPSCR_OFFS, &val);
+-	sw = val & 0x3f;
+-	sw2iso = (val >> 8) & 0x3f;
+-
+ 	/* Request GPC to power up domain */
+-	val = BIT(pd->cntr_pdn_bit + 1);
+-	regmap_update_bits(pd->regmap, GPC_CNTR, val, val);
++	req = BIT(pd->cntr_pdn_bit + 1);
++	regmap_update_bits(pd->regmap, GPC_CNTR, req, req);
+ 
+-	/* Wait ISO + ISO2SW IPG clock cycles */
+-	udelay(DIV_ROUND_UP(sw + sw2iso, pd->ipg_rate_mhz));
++	/* Wait for the PGC to handle the request */
++	ret = regmap_read_poll_timeout(pd->regmap, GPC_CNTR, val, !(val & req),
++				       1, 50);
++	if (ret)
++		pr_err("powerup request on domain %s timed out\n", genpd->name);
++
++	/* Wait for reset to propagate through peripherals */
++	usleep_range(5, 10);
+ 
+ 	/* Disable reset clocks for all devices in the domain */
+ 	for (i = 0; i < pd->num_clks; i++)
+@@ -343,6 +344,7 @@ static const struct regmap_config imx_gpc_regmap_config = {
+ 	.rd_table = &access_table,
+ 	.wr_table = &access_table,
+ 	.max_register = 0x2ac,
++	.fast_io = true,
+ };
+ 
+ static struct generic_pm_domain *imx_gpc_onecell_domains[] = {
+diff --git a/drivers/thermal/Kconfig b/drivers/thermal/Kconfig
+index 5a05db5438d6..5a0df0e54ce3 100644
+--- a/drivers/thermal/Kconfig
++++ b/drivers/thermal/Kconfig
+@@ -265,6 +265,7 @@ config QORIQ_THERMAL
+ 	tristate "QorIQ Thermal Monitoring Unit"
+ 	depends on THERMAL_OF
+ 	depends on HAS_IOMEM
++	select REGMAP_MMIO
+ 	help
+ 	  Support for Thermal Monitoring Unit (TMU) found on QorIQ platforms.
+ 	  It supports one critical trip point and one passive trip point. The
+diff --git a/drivers/thermal/cpufreq_cooling.c b/drivers/thermal/cpufreq_cooling.c
+index fe83d7a210d4..af55ac08e1bd 100644
+--- a/drivers/thermal/cpufreq_cooling.c
++++ b/drivers/thermal/cpufreq_cooling.c
+@@ -431,6 +431,7 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ 				 unsigned long state)
+ {
+ 	struct cpufreq_cooling_device *cpufreq_cdev = cdev->devdata;
++	int ret;
+ 
+ 	/* Request state should be less than max_level */
+ 	if (WARN_ON(state > cpufreq_cdev->max_level))
+@@ -442,8 +443,9 @@ static int cpufreq_set_cur_state(struct thermal_cooling_device *cdev,
+ 
+ 	cpufreq_cdev->cpufreq_state = state;
+ 
+-	return freq_qos_update_request(&cpufreq_cdev->qos_req,
+-				get_state_freq(cpufreq_cdev, state));
++	ret = freq_qos_update_request(&cpufreq_cdev->qos_req,
++				      get_state_freq(cpufreq_cdev, state));
++	return ret < 0 ? ret : 0;
+ }
+ 
+ /* Bind cpufreq callbacks to thermal cooling device ops */
+diff --git a/drivers/thermal/qcom/tsens-common.c b/drivers/thermal/qcom/tsens-common.c
+index c8d57ee0a5bb..2cc276cdfcdb 100644
+--- a/drivers/thermal/qcom/tsens-common.c
++++ b/drivers/thermal/qcom/tsens-common.c
+@@ -602,7 +602,7 @@ int __init init_common(struct tsens_priv *priv)
+ 		/* DT with separate SROT and TM address space */
+ 		priv->tm_offset = 0;
+ 		res = platform_get_resource(op, IORESOURCE_MEM, 1);
+-		srot_base = devm_ioremap_resource(&op->dev, res);
++		srot_base = devm_ioremap_resource(dev, res);
+ 		if (IS_ERR(srot_base)) {
+ 			ret = PTR_ERR(srot_base);
+ 			goto err_put_device;
+@@ -620,7 +620,7 @@ int __init init_common(struct tsens_priv *priv)
+ 	}
+ 
+ 	res = platform_get_resource(op, IORESOURCE_MEM, 0);
+-	tm_base = devm_ioremap_resource(&op->dev, res);
++	tm_base = devm_ioremap_resource(dev, res);
+ 	if (IS_ERR(tm_base)) {
+ 		ret = PTR_ERR(tm_base);
+ 		goto err_put_device;
+@@ -687,8 +687,6 @@ int __init init_common(struct tsens_priv *priv)
+ 	tsens_enable_irq(priv);
+ 	tsens_debug_init(op);
+ 
+-	return 0;
+-
+ err_put_device:
+ 	put_device(&op->dev);
+ 	return ret;
+diff --git a/drivers/tty/ehv_bytechan.c b/drivers/tty/ehv_bytechan.c
+index 769e0a5d1dfc..3c6dd06ec5fb 100644
+--- a/drivers/tty/ehv_bytechan.c
++++ b/drivers/tty/ehv_bytechan.c
+@@ -136,6 +136,21 @@ static int find_console_handle(void)
+ 	return 1;
+ }
+ 
++static unsigned int local_ev_byte_channel_send(unsigned int handle,
++					       unsigned int *count,
++					       const char *p)
++{
++	char buffer[EV_BYTE_CHANNEL_MAX_BYTES];
++	unsigned int c = *count;
++
++	if (c < sizeof(buffer)) {
++		memcpy(buffer, p, c);
++		memset(&buffer[c], 0, sizeof(buffer) - c);
++		p = buffer;
++	}
++	return ev_byte_channel_send(handle, count, p);
++}
++
+ /*************************** EARLY CONSOLE DRIVER ***************************/
+ 
+ #ifdef CONFIG_PPC_EARLY_DEBUG_EHV_BC
+@@ -154,7 +169,7 @@ static void byte_channel_spin_send(const char data)
+ 
+ 	do {
+ 		count = 1;
+-		ret = ev_byte_channel_send(CONFIG_PPC_EARLY_DEBUG_EHV_BC_HANDLE,
++		ret = local_ev_byte_channel_send(CONFIG_PPC_EARLY_DEBUG_EHV_BC_HANDLE,
+ 					   &count, &data);
+ 	} while (ret == EV_EAGAIN);
+ }
+@@ -221,7 +236,7 @@ static int ehv_bc_console_byte_channel_send(unsigned int handle, const char *s,
+ 	while (count) {
+ 		len = min_t(unsigned int, count, EV_BYTE_CHANNEL_MAX_BYTES);
+ 		do {
+-			ret = ev_byte_channel_send(handle, &len, s);
++			ret = local_ev_byte_channel_send(handle, &len, s);
+ 		} while (ret == EV_EAGAIN);
+ 		count -= len;
+ 		s += len;
+@@ -401,7 +416,7 @@ static void ehv_bc_tx_dequeue(struct ehv_bc_data *bc)
+ 			    CIRC_CNT_TO_END(bc->head, bc->tail, BUF_SIZE),
+ 			    EV_BYTE_CHANNEL_MAX_BYTES);
+ 
+-		ret = ev_byte_channel_send(bc->handle, &len, bc->buf + bc->tail);
++		ret = local_ev_byte_channel_send(bc->handle, &len, bc->buf + bc->tail);
+ 
+ 		/* 'len' is valid only if the return code is 0 or EV_EAGAIN */
+ 		if (!ret || (ret == EV_EAGAIN))
+diff --git a/drivers/video/fbdev/core/fbmem.c b/drivers/video/fbdev/core/fbmem.c
+index d04554959ea7..30e73ec4ad5c 100644
+--- a/drivers/video/fbdev/core/fbmem.c
++++ b/drivers/video/fbdev/core/fbmem.c
+@@ -663,20 +663,20 @@ int fb_prepare_logo(struct fb_info *info, int rotate)
+ 		fb_logo.depth = 1;
+ 
+ 
+- 	if (fb_logo.depth > 4 && depth > 4) {
+- 		switch (info->fix.visual) {
+- 		case FB_VISUAL_TRUECOLOR:
+- 			fb_logo.needs_truepalette = 1;
+- 			break;
+- 		case FB_VISUAL_DIRECTCOLOR:
+- 			fb_logo.needs_directpalette = 1;
+- 			fb_logo.needs_cmapreset = 1;
+- 			break;
+- 		case FB_VISUAL_PSEUDOCOLOR:
+- 			fb_logo.needs_cmapreset = 1;
+- 			break;
+- 		}
+- 	}
++	if (fb_logo.depth > 4 && depth > 4) {
++		switch (info->fix.visual) {
++		case FB_VISUAL_TRUECOLOR:
++			fb_logo.needs_truepalette = 1;
++			break;
++		case FB_VISUAL_DIRECTCOLOR:
++			fb_logo.needs_directpalette = 1;
++			fb_logo.needs_cmapreset = 1;
++			break;
++		case FB_VISUAL_PSEUDOCOLOR:
++			fb_logo.needs_cmapreset = 1;
++			break;
++		}
++	}
+ 
+ 	height = fb_logo.logo->height;
+ 	if (fb_center_logo)
+@@ -1065,19 +1065,19 @@ fb_blank(struct fb_info *info, int blank)
+ 	struct fb_event event;
+ 	int ret = -EINVAL;
+ 
+- 	if (blank > FB_BLANK_POWERDOWN)
+- 		blank = FB_BLANK_POWERDOWN;
++	if (blank > FB_BLANK_POWERDOWN)
++		blank = FB_BLANK_POWERDOWN;
+ 
+ 	event.info = info;
+ 	event.data = &blank;
+ 
+ 	if (info->fbops->fb_blank)
+- 		ret = info->fbops->fb_blank(blank, info);
++		ret = info->fbops->fb_blank(blank, info);
+ 
+ 	if (!ret)
+ 		fb_notifier_call_chain(FB_EVENT_BLANK, &event);
+ 
+- 	return ret;
++	return ret;
+ }
+ EXPORT_SYMBOL(fb_blank);
+ 
+@@ -1115,7 +1115,7 @@ static long do_fb_ioctl(struct fb_info *info, unsigned int cmd,
+ 		break;
+ 	case FBIOGET_FSCREENINFO:
+ 		lock_fb_info(info);
+-		fix = info->fix;
++		memcpy(&fix, &info->fix, sizeof(fix));
+ 		if (info->flags & FBINFO_HIDE_SMEM_START)
+ 			fix.smem_start = 0;
+ 		unlock_fb_info(info);
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 341458fd95ca..44375a22307b 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/module.h>
+ #include <linux/balloon_compaction.h>
++#include <linux/oom.h>
+ #include <linux/wait.h>
+ #include <linux/mm.h>
+ #include <linux/mount.h>
+@@ -27,7 +28,9 @@
+  */
+ #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
+ #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
+-#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
++/* Maximum number of (4k) pages to deflate on OOM notifications. */
++#define VIRTIO_BALLOON_OOM_NR_PAGES 256
++#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
+ 
+ #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
+ 					     __GFP_NOMEMALLOC)
+@@ -112,8 +115,11 @@ struct virtio_balloon {
+ 	/* Memory statistics */
+ 	struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
+ 
+-	/* To register a shrinker to shrink memory upon memory pressure */
++	/* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
+ 	struct shrinker shrinker;
++
++	/* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
++	struct notifier_block oom_nb;
+ };
+ 
+ static struct virtio_device_id id_table[] = {
+@@ -788,50 +794,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
+ 	return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ }
+ 
+-static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
+-                                          unsigned long pages_to_free)
+-{
+-	return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
+-		VIRTIO_BALLOON_PAGES_PER_PAGE;
+-}
+-
+-static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
+-					  unsigned long pages_to_free)
+-{
+-	unsigned long pages_freed = 0;
+-
+-	/*
+-	 * One invocation of leak_balloon can deflate at most
+-	 * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
+-	 * multiple times to deflate pages till reaching pages_to_free.
+-	 */
+-	while (vb->num_pages && pages_freed < pages_to_free)
+-		pages_freed += leak_balloon_pages(vb,
+-						  pages_to_free - pages_freed);
+-
+-	update_balloon_size(vb);
+-
+-	return pages_freed;
+-}
+-
+ static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
+ 						  struct shrink_control *sc)
+ {
+-	unsigned long pages_to_free, pages_freed = 0;
+ 	struct virtio_balloon *vb = container_of(shrinker,
+ 					struct virtio_balloon, shrinker);
+ 
+-	pages_to_free = sc->nr_to_scan;
+-
+-	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+-		pages_freed = shrink_free_pages(vb, pages_to_free);
+-
+-	if (pages_freed >= pages_to_free)
+-		return pages_freed;
+-
+-	pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
+-
+-	return pages_freed;
++	return shrink_free_pages(vb, sc->nr_to_scan);
+ }
+ 
+ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+@@ -839,26 +808,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+ {
+ 	struct virtio_balloon *vb = container_of(shrinker,
+ 					struct virtio_balloon, shrinker);
+-	unsigned long count;
+-
+-	count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
+-	count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ 
+-	return count;
++	return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ }
+ 
+-static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
++static int virtio_balloon_oom_notify(struct notifier_block *nb,
++				     unsigned long dummy, void *parm)
+ {
+-	unregister_shrinker(&vb->shrinker);
+-}
++	struct virtio_balloon *vb = container_of(nb,
++						 struct virtio_balloon, oom_nb);
++	unsigned long *freed = parm;
+ 
+-static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
+-{
+-	vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
+-	vb->shrinker.count_objects = virtio_balloon_shrinker_count;
+-	vb->shrinker.seeks = DEFAULT_SEEKS;
++	*freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
++		  VIRTIO_BALLOON_PAGES_PER_PAGE;
++	update_balloon_size(vb);
+ 
+-	return register_shrinker(&vb->shrinker);
++	return NOTIFY_OK;
+ }
+ 
+ static int virtballoon_probe(struct virtio_device *vdev)
+@@ -935,22 +900,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
+ 			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+ 				      poison_val, &poison_val);
+ 		}
+-	}
+-	/*
+-	 * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
+-	 * shrinker needs to be registered to relieve memory pressure.
+-	 */
+-	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
+-		err = virtio_balloon_register_shrinker(vb);
++
++		/*
++		 * We're allowed to reuse any free pages, even if they are
++		 * still to be processed by the host.
++		 */
++		vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
++		vb->shrinker.count_objects = virtio_balloon_shrinker_count;
++		vb->shrinker.seeks = DEFAULT_SEEKS;
++		err = register_shrinker(&vb->shrinker);
+ 		if (err)
+ 			goto out_del_balloon_wq;
+ 	}
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
++		vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
++		vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
++		err = register_oom_notifier(&vb->oom_nb);
++		if (err < 0)
++			goto out_unregister_shrinker;
++	}
++
+ 	virtio_device_ready(vdev);
+ 
+ 	if (towards_target(vb))
+ 		virtballoon_changed(vdev);
+ 	return 0;
+ 
++out_unregister_shrinker:
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
++		unregister_shrinker(&vb->shrinker);
+ out_del_balloon_wq:
+ 	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+ 		destroy_workqueue(vb->balloon_wq);
+@@ -989,8 +967,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
+ {
+ 	struct virtio_balloon *vb = vdev->priv;
+ 
+-	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
+-		virtio_balloon_unregister_shrinker(vb);
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
++		unregister_oom_notifier(&vb->oom_nb);
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
++		unregister_shrinker(&vb->shrinker);
++
+ 	spin_lock_irq(&vb->stop_update_lock);
+ 	vb->stop_update = true;
+ 	spin_unlock_irq(&vb->stop_update_lock);
+diff --git a/drivers/watchdog/sp805_wdt.c b/drivers/watchdog/sp805_wdt.c
+index 53e04926a7b2..190d26e2e75f 100644
+--- a/drivers/watchdog/sp805_wdt.c
++++ b/drivers/watchdog/sp805_wdt.c
+@@ -137,10 +137,14 @@ wdt_restart(struct watchdog_device *wdd, unsigned long mode, void *cmd)
+ {
+ 	struct sp805_wdt *wdt = watchdog_get_drvdata(wdd);
+ 
++	writel_relaxed(UNLOCK, wdt->base + WDTLOCK);
+ 	writel_relaxed(0, wdt->base + WDTCONTROL);
+ 	writel_relaxed(0, wdt->base + WDTLOAD);
+ 	writel_relaxed(INT_ENABLE | RESET_ENABLE, wdt->base + WDTCONTROL);
+ 
++	/* Flush posted writes. */
++	readl_relaxed(wdt->base + WDTLOCK);
++
+ 	return 0;
+ }
+ 
+diff --git a/fs/afs/dir.c b/fs/afs/dir.c
+index 5c794f4b051a..d1e1caa23c8b 100644
+--- a/fs/afs/dir.c
++++ b/fs/afs/dir.c
+@@ -1032,7 +1032,7 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	struct dentry *parent;
+ 	struct inode *inode;
+ 	struct key *key;
+-	afs_dataversion_t dir_version;
++	afs_dataversion_t dir_version, invalid_before;
+ 	long de_version;
+ 	int ret;
+ 
+@@ -1084,8 +1084,8 @@ static int afs_d_revalidate(struct dentry *dentry, unsigned int flags)
+ 	if (de_version == (long)dir_version)
+ 		goto out_valid_noupdate;
+ 
+-	dir_version = dir->invalid_before;
+-	if (de_version - (long)dir_version >= 0)
++	invalid_before = dir->invalid_before;
++	if (de_version - (long)invalid_before >= 0)
+ 		goto out_valid;
+ 
+ 	_debug("dir modified");
+@@ -1275,6 +1275,7 @@ static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 	struct afs_fs_cursor fc;
+ 	struct afs_vnode *dvnode = AFS_FS_I(dir);
+ 	struct key *key;
++	afs_dataversion_t data_version;
+ 	int ret;
+ 
+ 	mode |= S_IFDIR;
+@@ -1295,7 +1296,7 @@ static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 
+ 	ret = -ERESTARTSYS;
+ 	if (afs_begin_vnode_operation(&fc, dvnode, key, true)) {
+-		afs_dataversion_t data_version = dvnode->status.data_version + 1;
++		data_version = dvnode->status.data_version + 1;
+ 
+ 		while (afs_select_fileserver(&fc)) {
+ 			fc.cb_break = afs_calc_vnode_cb_break(dvnode);
+@@ -1316,10 +1317,14 @@ static int afs_mkdir(struct inode *dir, struct dentry *dentry, umode_t mode)
+ 		goto error_key;
+ 	}
+ 
+-	if (ret == 0 &&
+-	    test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
+-		afs_edit_dir_add(dvnode, &dentry->d_name, &iget_data.fid,
+-				 afs_edit_dir_for_create);
++	if (ret == 0) {
++		down_write(&dvnode->validate_lock);
++		if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++		    dvnode->status.data_version == data_version)
++			afs_edit_dir_add(dvnode, &dentry->d_name, &iget_data.fid,
++					 afs_edit_dir_for_create);
++		up_write(&dvnode->validate_lock);
++	}
+ 
+ 	key_put(key);
+ 	kfree(scb);
+@@ -1360,6 +1365,7 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ 	struct afs_fs_cursor fc;
+ 	struct afs_vnode *dvnode = AFS_FS_I(dir), *vnode = NULL;
+ 	struct key *key;
++	afs_dataversion_t data_version;
+ 	int ret;
+ 
+ 	_enter("{%llx:%llu},{%pd}",
+@@ -1391,7 +1397,7 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ 
+ 	ret = -ERESTARTSYS;
+ 	if (afs_begin_vnode_operation(&fc, dvnode, key, true)) {
+-		afs_dataversion_t data_version = dvnode->status.data_version + 1;
++		data_version = dvnode->status.data_version + 1;
+ 
+ 		while (afs_select_fileserver(&fc)) {
+ 			fc.cb_break = afs_calc_vnode_cb_break(dvnode);
+@@ -1404,9 +1410,12 @@ static int afs_rmdir(struct inode *dir, struct dentry *dentry)
+ 		ret = afs_end_vnode_operation(&fc);
+ 		if (ret == 0) {
+ 			afs_dir_remove_subdir(dentry);
+-			if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
++			down_write(&dvnode->validate_lock);
++			if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++			    dvnode->status.data_version == data_version)
+ 				afs_edit_dir_remove(dvnode, &dentry->d_name,
+ 						    afs_edit_dir_for_rmdir);
++			up_write(&dvnode->validate_lock);
+ 		}
+ 	}
+ 
+@@ -1544,10 +1553,15 @@ static int afs_unlink(struct inode *dir, struct dentry *dentry)
+ 		ret = afs_end_vnode_operation(&fc);
+ 		if (ret == 0 && !(scb[1].have_status || scb[1].have_error))
+ 			ret = afs_dir_remove_link(dvnode, dentry, key);
+-		if (ret == 0 &&
+-		    test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
+-			afs_edit_dir_remove(dvnode, &dentry->d_name,
+-					    afs_edit_dir_for_unlink);
++
++		if (ret == 0) {
++			down_write(&dvnode->validate_lock);
++			if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++			    dvnode->status.data_version == data_version)
++				afs_edit_dir_remove(dvnode, &dentry->d_name,
++						    afs_edit_dir_for_unlink);
++			up_write(&dvnode->validate_lock);
++		}
+ 	}
+ 
+ 	if (need_rehash && ret < 0 && ret != -ENOENT)
+@@ -1573,6 +1587,7 @@ static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 	struct afs_status_cb *scb;
+ 	struct afs_vnode *dvnode = AFS_FS_I(dir);
+ 	struct key *key;
++	afs_dataversion_t data_version;
+ 	int ret;
+ 
+ 	mode |= S_IFREG;
+@@ -1597,7 +1612,7 @@ static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 
+ 	ret = -ERESTARTSYS;
+ 	if (afs_begin_vnode_operation(&fc, dvnode, key, true)) {
+-		afs_dataversion_t data_version = dvnode->status.data_version + 1;
++		data_version = dvnode->status.data_version + 1;
+ 
+ 		while (afs_select_fileserver(&fc)) {
+ 			fc.cb_break = afs_calc_vnode_cb_break(dvnode);
+@@ -1618,9 +1633,12 @@ static int afs_create(struct inode *dir, struct dentry *dentry, umode_t mode,
+ 		goto error_key;
+ 	}
+ 
+-	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
++	down_write(&dvnode->validate_lock);
++	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++	    dvnode->status.data_version == data_version)
+ 		afs_edit_dir_add(dvnode, &dentry->d_name, &iget_data.fid,
+ 				 afs_edit_dir_for_create);
++	up_write(&dvnode->validate_lock);
+ 
+ 	kfree(scb);
+ 	key_put(key);
+@@ -1648,6 +1666,7 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 	struct afs_vnode *dvnode = AFS_FS_I(dir);
+ 	struct afs_vnode *vnode = AFS_FS_I(d_inode(from));
+ 	struct key *key;
++	afs_dataversion_t data_version;
+ 	int ret;
+ 
+ 	_enter("{%llx:%llu},{%llx:%llu},{%pd}",
+@@ -1672,7 +1691,7 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 
+ 	ret = -ERESTARTSYS;
+ 	if (afs_begin_vnode_operation(&fc, dvnode, key, true)) {
+-		afs_dataversion_t data_version = dvnode->status.data_version + 1;
++		data_version = dvnode->status.data_version + 1;
+ 
+ 		if (mutex_lock_interruptible_nested(&vnode->io_lock, 1) < 0) {
+ 			afs_end_vnode_operation(&fc);
+@@ -1702,9 +1721,12 @@ static int afs_link(struct dentry *from, struct inode *dir,
+ 		goto error_key;
+ 	}
+ 
+-	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
++	down_write(&dvnode->validate_lock);
++	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++	    dvnode->status.data_version == data_version)
+ 		afs_edit_dir_add(dvnode, &dentry->d_name, &vnode->fid,
+ 				 afs_edit_dir_for_link);
++	up_write(&dvnode->validate_lock);
+ 
+ 	key_put(key);
+ 	kfree(scb);
+@@ -1732,6 +1754,7 @@ static int afs_symlink(struct inode *dir, struct dentry *dentry,
+ 	struct afs_status_cb *scb;
+ 	struct afs_vnode *dvnode = AFS_FS_I(dir);
+ 	struct key *key;
++	afs_dataversion_t data_version;
+ 	int ret;
+ 
+ 	_enter("{%llx:%llu},{%pd},%s",
+@@ -1759,7 +1782,7 @@ static int afs_symlink(struct inode *dir, struct dentry *dentry,
+ 
+ 	ret = -ERESTARTSYS;
+ 	if (afs_begin_vnode_operation(&fc, dvnode, key, true)) {
+-		afs_dataversion_t data_version = dvnode->status.data_version + 1;
++		data_version = dvnode->status.data_version + 1;
+ 
+ 		while (afs_select_fileserver(&fc)) {
+ 			fc.cb_break = afs_calc_vnode_cb_break(dvnode);
+@@ -1780,9 +1803,12 @@ static int afs_symlink(struct inode *dir, struct dentry *dentry,
+ 		goto error_key;
+ 	}
+ 
+-	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
++	down_write(&dvnode->validate_lock);
++	if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++	    dvnode->status.data_version == data_version)
+ 		afs_edit_dir_add(dvnode, &dentry->d_name, &iget_data.fid,
+ 				 afs_edit_dir_for_symlink);
++	up_write(&dvnode->validate_lock);
+ 
+ 	key_put(key);
+ 	kfree(scb);
+@@ -1812,6 +1838,8 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	struct dentry *tmp = NULL, *rehash = NULL;
+ 	struct inode *new_inode;
+ 	struct key *key;
++	afs_dataversion_t orig_data_version;
++	afs_dataversion_t new_data_version;
+ 	bool new_negative = d_is_negative(new_dentry);
+ 	int ret;
+ 
+@@ -1890,10 +1918,6 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 
+ 	ret = -ERESTARTSYS;
+ 	if (afs_begin_vnode_operation(&fc, orig_dvnode, key, true)) {
+-		afs_dataversion_t orig_data_version;
+-		afs_dataversion_t new_data_version;
+-		struct afs_status_cb *new_scb = &scb[1];
+-
+ 		orig_data_version = orig_dvnode->status.data_version + 1;
+ 
+ 		if (orig_dvnode != new_dvnode) {
+@@ -1904,7 +1928,6 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			new_data_version = new_dvnode->status.data_version + 1;
+ 		} else {
+ 			new_data_version = orig_data_version;
+-			new_scb = &scb[0];
+ 		}
+ 
+ 		while (afs_select_fileserver(&fc)) {
+@@ -1912,7 +1935,7 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 			fc.cb_break_2 = afs_calc_vnode_cb_break(new_dvnode);
+ 			afs_fs_rename(&fc, old_dentry->d_name.name,
+ 				      new_dvnode, new_dentry->d_name.name,
+-				      &scb[0], new_scb);
++				      &scb[0], &scb[1]);
+ 		}
+ 
+ 		afs_vnode_commit_status(&fc, orig_dvnode, fc.cb_break,
+@@ -1930,18 +1953,25 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 	if (ret == 0) {
+ 		if (rehash)
+ 			d_rehash(rehash);
+-		if (test_bit(AFS_VNODE_DIR_VALID, &orig_dvnode->flags))
+-		    afs_edit_dir_remove(orig_dvnode, &old_dentry->d_name,
+-					afs_edit_dir_for_rename_0);
++		down_write(&orig_dvnode->validate_lock);
++		if (test_bit(AFS_VNODE_DIR_VALID, &orig_dvnode->flags) &&
++		    orig_dvnode->status.data_version == orig_data_version)
++			afs_edit_dir_remove(orig_dvnode, &old_dentry->d_name,
++					    afs_edit_dir_for_rename_0);
++		if (orig_dvnode != new_dvnode) {
++			up_write(&orig_dvnode->validate_lock);
+ 
+-		if (!new_negative &&
+-		    test_bit(AFS_VNODE_DIR_VALID, &new_dvnode->flags))
+-			afs_edit_dir_remove(new_dvnode, &new_dentry->d_name,
+-					    afs_edit_dir_for_rename_1);
++			down_write(&new_dvnode->validate_lock);
++		}
++		if (test_bit(AFS_VNODE_DIR_VALID, &new_dvnode->flags) &&
++		    orig_dvnode->status.data_version == new_data_version) {
++			if (!new_negative)
++				afs_edit_dir_remove(new_dvnode, &new_dentry->d_name,
++						    afs_edit_dir_for_rename_1);
+ 
+-		if (test_bit(AFS_VNODE_DIR_VALID, &new_dvnode->flags))
+ 			afs_edit_dir_add(new_dvnode, &new_dentry->d_name,
+ 					 &vnode->fid, afs_edit_dir_for_rename_2);
++		}
+ 
+ 		new_inode = d_inode(new_dentry);
+ 		if (new_inode) {
+@@ -1957,14 +1987,10 @@ static int afs_rename(struct inode *old_dir, struct dentry *old_dentry,
+ 		 * Note that if we ever implement RENAME_EXCHANGE, we'll have
+ 		 * to update both dentries with opposing dir versions.
+ 		 */
+-		if (new_dvnode != orig_dvnode) {
+-			afs_update_dentry_version(&fc, old_dentry, &scb[1]);
+-			afs_update_dentry_version(&fc, new_dentry, &scb[1]);
+-		} else {
+-			afs_update_dentry_version(&fc, old_dentry, &scb[0]);
+-			afs_update_dentry_version(&fc, new_dentry, &scb[0]);
+-		}
++		afs_update_dentry_version(&fc, old_dentry, &scb[1]);
++		afs_update_dentry_version(&fc, new_dentry, &scb[1]);
+ 		d_move(old_dentry, new_dentry);
++		up_write(&new_dvnode->validate_lock);
+ 		goto error_tmp;
+ 	}
+ 
+diff --git a/fs/afs/dir_silly.c b/fs/afs/dir_silly.c
+index 361088a5edb9..d94e2b7cddff 100644
+--- a/fs/afs/dir_silly.c
++++ b/fs/afs/dir_silly.c
+@@ -21,6 +21,7 @@ static int afs_do_silly_rename(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ {
+ 	struct afs_fs_cursor fc;
+ 	struct afs_status_cb *scb;
++	afs_dataversion_t dir_data_version;
+ 	int ret = -ERESTARTSYS;
+ 
+ 	_enter("%pd,%pd", old, new);
+@@ -31,7 +32,7 @@ static int afs_do_silly_rename(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 
+ 	trace_afs_silly_rename(vnode, false);
+ 	if (afs_begin_vnode_operation(&fc, dvnode, key, true)) {
+-		afs_dataversion_t dir_data_version = dvnode->status.data_version + 1;
++		dir_data_version = dvnode->status.data_version + 1;
+ 
+ 		while (afs_select_fileserver(&fc)) {
+ 			fc.cb_break = afs_calc_vnode_cb_break(dvnode);
+@@ -54,12 +55,15 @@ static int afs_do_silly_rename(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 			dvnode->silly_key = key_get(key);
+ 		}
+ 
+-		if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
++		down_write(&dvnode->validate_lock);
++		if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++		    dvnode->status.data_version == dir_data_version) {
+ 			afs_edit_dir_remove(dvnode, &old->d_name,
+ 					    afs_edit_dir_for_silly_0);
+-		if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
+ 			afs_edit_dir_add(dvnode, &new->d_name,
+ 					 &vnode->fid, afs_edit_dir_for_silly_1);
++		}
++		up_write(&dvnode->validate_lock);
+ 	}
+ 
+ 	kfree(scb);
+@@ -181,10 +185,14 @@ static int afs_do_silly_unlink(struct afs_vnode *dvnode, struct afs_vnode *vnode
+ 				clear_bit(AFS_VNODE_CB_PROMISED, &vnode->flags);
+ 			}
+ 		}
+-		if (ret == 0 &&
+-		    test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags))
+-			afs_edit_dir_remove(dvnode, &dentry->d_name,
+-					    afs_edit_dir_for_unlink);
++		if (ret == 0) {
++			down_write(&dvnode->validate_lock);
++			if (test_bit(AFS_VNODE_DIR_VALID, &dvnode->flags) &&
++			    dvnode->status.data_version == dir_data_version)
++				afs_edit_dir_remove(dvnode, &dentry->d_name,
++						    afs_edit_dir_for_unlink);
++			up_write(&dvnode->validate_lock);
++		}
+ 	}
+ 
+ 	kfree(scb);
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index 1f9c5d8e6fe5..68fc46634346 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -65,6 +65,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ 	bool inline_error = (call->operation_ID == afs_FS_InlineBulkStatus);
+ 	u64 data_version, size;
+ 	u32 type, abort_code;
++	int ret;
+ 
+ 	abort_code = ntohl(xdr->abort_code);
+ 
+@@ -78,7 +79,7 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ 			 */
+ 			status->abort_code = abort_code;
+ 			scb->have_error = true;
+-			return 0;
++			goto good;
+ 		}
+ 
+ 		pr_warn("Unknown AFSFetchStatus version %u\n", ntohl(xdr->if_version));
+@@ -87,7 +88,8 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ 
+ 	if (abort_code != 0 && inline_error) {
+ 		status->abort_code = abort_code;
+-		return 0;
++		scb->have_error = true;
++		goto good;
+ 	}
+ 
+ 	type = ntohl(xdr->type);
+@@ -123,13 +125,16 @@ static int xdr_decode_AFSFetchStatus(const __be32 **_bp,
+ 	data_version |= (u64)ntohl(xdr->data_version_hi) << 32;
+ 	status->data_version = data_version;
+ 	scb->have_status = true;
+-
++good:
++	ret = 0;
++advance:
+ 	*_bp = (const void *)*_bp + sizeof(*xdr);
+-	return 0;
++	return ret;
+ 
+ bad:
+ 	xdr_dump_bad(*_bp);
+-	return afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++	ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++	goto advance;
+ }
+ 
+ static time64_t xdr_decode_expiry(struct afs_call *call, u32 expiry)
+@@ -981,16 +986,16 @@ static int afs_deliver_fs_rename(struct afs_call *call)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	/* unmarshall the reply once we've received all of it */
++	/* If the two dirs are the same, we have two copies of the same status
++	 * report, so we just decode it twice.
++	 */
+ 	bp = call->buffer;
+ 	ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_dir_scb);
+ 	if (ret < 0)
+ 		return ret;
+-	if (call->out_dir_scb != call->out_scb) {
+-		ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
+-		if (ret < 0)
+-			return ret;
+-	}
++	ret = xdr_decode_AFSFetchStatus(&bp, call, call->out_scb);
++	if (ret < 0)
++		return ret;
+ 	xdr_decode_AFSVolSync(&bp, call->out_volsync);
+ 
+ 	_leave(" = 0 [done]");
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index a26126ac7bf1..83b6d67325f6 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -186,13 +186,14 @@ static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
+ 	const struct yfs_xdr_YFSFetchStatus *xdr = (const void *)*_bp;
+ 	struct afs_file_status *status = &scb->status;
+ 	u32 type;
++	int ret;
+ 
+ 	status->abort_code = ntohl(xdr->abort_code);
+ 	if (status->abort_code != 0) {
+ 		if (status->abort_code == VNOVNODE)
+ 			status->nlink = 0;
+ 		scb->have_error = true;
+-		return 0;
++		goto good;
+ 	}
+ 
+ 	type = ntohl(xdr->type);
+@@ -220,13 +221,16 @@ static int xdr_decode_YFSFetchStatus(const __be32 **_bp,
+ 	status->size		= xdr_to_u64(xdr->size);
+ 	status->data_version	= xdr_to_u64(xdr->data_version);
+ 	scb->have_status	= true;
+-
++good:
++	ret = 0;
++advance:
+ 	*_bp += xdr_size(xdr);
+-	return 0;
++	return ret;
+ 
+ bad:
+ 	xdr_dump_bad(*_bp);
+-	return afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++	ret = afs_protocol_error(call, -EBADMSG, afs_eproto_bad_status);
++	goto advance;
+ }
+ 
+ /*
+@@ -1153,11 +1157,9 @@ static int yfs_deliver_fs_rename(struct afs_call *call)
+ 	ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_dir_scb);
+ 	if (ret < 0)
+ 		return ret;
+-	if (call->out_dir_scb != call->out_scb) {
+-		ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
+-		if (ret < 0)
+-			return ret;
+-	}
++	ret = xdr_decode_YFSFetchStatus(&bp, call, call->out_scb);
++	if (ret < 0)
++		return ret;
+ 
+ 	xdr_decode_YFSVolSync(&bp, call->out_volsync);
+ 	_leave(" = 0 [done]");
+diff --git a/fs/block_dev.c b/fs/block_dev.c
+index 69bf2fb6f7cd..84fe0162ff13 100644
+--- a/fs/block_dev.c
++++ b/fs/block_dev.c
+@@ -34,6 +34,7 @@
+ #include <linux/task_io_accounting_ops.h>
+ #include <linux/falloc.h>
+ #include <linux/uaccess.h>
++#include <linux/suspend.h>
+ #include "internal.h"
+ 
+ struct bdev_inode {
+@@ -2001,7 +2002,8 @@ ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from)
+ 	if (bdev_read_only(I_BDEV(bd_inode)))
+ 		return -EPERM;
+ 
+-	if (IS_SWAPFILE(bd_inode))
++	/* uswsusp needs write permission to the swap */
++	if (IS_SWAPFILE(bd_inode) && !hibernation_available())
+ 		return -ETXTBSY;
+ 
+ 	if (!iov_iter_count(from))
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index 7f09147872dc..c9a3bbc8c6af 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1987,6 +1987,7 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
+ 		btrfs_release_path(path);
+ 	}
+ 
++	rcu_read_lock();
+ 	list_for_each_entry_rcu(space_info, &info->space_info, list) {
+ 		if (!(btrfs_get_alloc_profile(info, space_info->flags) &
+ 		      (BTRFS_BLOCK_GROUP_RAID10 |
+@@ -2007,6 +2008,7 @@ int btrfs_read_block_groups(struct btrfs_fs_info *info)
+ 				list)
+ 			inc_block_group_ro(cache, 1);
+ 	}
++	rcu_read_unlock();
+ 
+ 	btrfs_init_global_block_rsv(info);
+ 	ret = check_chunk_block_group_mappings(info);
+diff --git a/fs/buffer.c b/fs/buffer.c
+index b8d28370cfd7..a50d928af641 100644
+--- a/fs/buffer.c
++++ b/fs/buffer.c
+@@ -1377,6 +1377,17 @@ void __breadahead(struct block_device *bdev, sector_t block, unsigned size)
+ }
+ EXPORT_SYMBOL(__breadahead);
+ 
++void __breadahead_gfp(struct block_device *bdev, sector_t block, unsigned size,
++		      gfp_t gfp)
++{
++	struct buffer_head *bh = __getblk_gfp(bdev, block, size, gfp);
++	if (likely(bh)) {
++		ll_rw_block(REQ_OP_READ, REQ_RAHEAD, 1, &bh);
++		brelse(bh);
++	}
++}
++EXPORT_SYMBOL(__breadahead_gfp);
++
+ /**
+  *  __bread_gfp() - reads a specified block and returns the bh
+  *  @bdev: the block_device to read from
+diff --git a/fs/ceph/file.c b/fs/ceph/file.c
+index 5a478cd06e11..7f8c4e308301 100644
+--- a/fs/ceph/file.c
++++ b/fs/ceph/file.c
+@@ -1944,6 +1944,71 @@ static int is_file_size_ok(struct inode *src_inode, struct inode *dst_inode,
+ 	return 0;
+ }
+ 
++static ssize_t ceph_do_objects_copy(struct ceph_inode_info *src_ci, u64 *src_off,
++				    struct ceph_inode_info *dst_ci, u64 *dst_off,
++				    struct ceph_fs_client *fsc,
++				    size_t len, unsigned int flags)
++{
++	struct ceph_object_locator src_oloc, dst_oloc;
++	struct ceph_object_id src_oid, dst_oid;
++	size_t bytes = 0;
++	u64 src_objnum, src_objoff, dst_objnum, dst_objoff;
++	u32 src_objlen, dst_objlen;
++	u32 object_size = src_ci->i_layout.object_size;
++	int ret;
++
++	src_oloc.pool = src_ci->i_layout.pool_id;
++	src_oloc.pool_ns = ceph_try_get_string(src_ci->i_layout.pool_ns);
++	dst_oloc.pool = dst_ci->i_layout.pool_id;
++	dst_oloc.pool_ns = ceph_try_get_string(dst_ci->i_layout.pool_ns);
++
++	while (len >= object_size) {
++		ceph_calc_file_object_mapping(&src_ci->i_layout, *src_off,
++					      object_size, &src_objnum,
++					      &src_objoff, &src_objlen);
++		ceph_calc_file_object_mapping(&dst_ci->i_layout, *dst_off,
++					      object_size, &dst_objnum,
++					      &dst_objoff, &dst_objlen);
++		ceph_oid_init(&src_oid);
++		ceph_oid_printf(&src_oid, "%llx.%08llx",
++				src_ci->i_vino.ino, src_objnum);
++		ceph_oid_init(&dst_oid);
++		ceph_oid_printf(&dst_oid, "%llx.%08llx",
++				dst_ci->i_vino.ino, dst_objnum);
++		/* Do an object remote copy */
++		ret = ceph_osdc_copy_from(&fsc->client->osdc,
++					  src_ci->i_vino.snap, 0,
++					  &src_oid, &src_oloc,
++					  CEPH_OSD_OP_FLAG_FADVISE_SEQUENTIAL |
++					  CEPH_OSD_OP_FLAG_FADVISE_NOCACHE,
++					  &dst_oid, &dst_oloc,
++					  CEPH_OSD_OP_FLAG_FADVISE_SEQUENTIAL |
++					  CEPH_OSD_OP_FLAG_FADVISE_DONTNEED,
++					  dst_ci->i_truncate_seq,
++					  dst_ci->i_truncate_size,
++					  CEPH_OSD_COPY_FROM_FLAG_TRUNCATE_SEQ);
++		if (ret) {
++			if (ret == -EOPNOTSUPP) {
++				fsc->have_copy_from2 = false;
++				pr_notice("OSDs don't support copy-from2; disabling copy offload\n");
++			}
++			dout("ceph_osdc_copy_from returned %d\n", ret);
++			if (!bytes)
++				bytes = ret;
++			goto out;
++		}
++		len -= object_size;
++		bytes += object_size;
++		*src_off += object_size;
++		*dst_off += object_size;
++	}
++
++out:
++	ceph_oloc_destroy(&src_oloc);
++	ceph_oloc_destroy(&dst_oloc);
++	return bytes;
++}
++
+ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 				      struct file *dst_file, loff_t dst_off,
+ 				      size_t len, unsigned int flags)
+@@ -1954,14 +2019,11 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 	struct ceph_inode_info *dst_ci = ceph_inode(dst_inode);
+ 	struct ceph_cap_flush *prealloc_cf;
+ 	struct ceph_fs_client *src_fsc = ceph_inode_to_client(src_inode);
+-	struct ceph_object_locator src_oloc, dst_oloc;
+-	struct ceph_object_id src_oid, dst_oid;
+-	loff_t endoff = 0, size;
+-	ssize_t ret = -EIO;
++	loff_t size;
++	ssize_t ret = -EIO, bytes;
+ 	u64 src_objnum, dst_objnum, src_objoff, dst_objoff;
+-	u32 src_objlen, dst_objlen, object_size;
++	u32 src_objlen, dst_objlen;
+ 	int src_got = 0, dst_got = 0, err, dirty;
+-	bool do_final_copy = false;
+ 
+ 	if (src_inode->i_sb != dst_inode->i_sb) {
+ 		struct ceph_fs_client *dst_fsc = ceph_inode_to_client(dst_inode);
+@@ -2039,22 +2101,14 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 	if (ret < 0)
+ 		goto out_caps;
+ 
+-	size = i_size_read(dst_inode);
+-	endoff = dst_off + len;
+-
+ 	/* Drop dst file cached pages */
+ 	ret = invalidate_inode_pages2_range(dst_inode->i_mapping,
+ 					    dst_off >> PAGE_SHIFT,
+-					    endoff >> PAGE_SHIFT);
++					    (dst_off + len) >> PAGE_SHIFT);
+ 	if (ret < 0) {
+ 		dout("Failed to invalidate inode pages (%zd)\n", ret);
+ 		ret = 0; /* XXX */
+ 	}
+-	src_oloc.pool = src_ci->i_layout.pool_id;
+-	src_oloc.pool_ns = ceph_try_get_string(src_ci->i_layout.pool_ns);
+-	dst_oloc.pool = dst_ci->i_layout.pool_id;
+-	dst_oloc.pool_ns = ceph_try_get_string(dst_ci->i_layout.pool_ns);
+-
+ 	ceph_calc_file_object_mapping(&src_ci->i_layout, src_off,
+ 				      src_ci->i_layout.object_size,
+ 				      &src_objnum, &src_objoff, &src_objlen);
+@@ -2073,6 +2127,8 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 	 * starting at the src_off
+ 	 */
+ 	if (src_objoff) {
++		dout("Initial partial copy of %u bytes\n", src_objlen);
++
+ 		/*
+ 		 * we need to temporarily drop all caps as we'll be calling
+ 		 * {read,write}_iter, which will get caps again.
+@@ -2080,8 +2136,9 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 		put_rd_wr_caps(src_ci, src_got, dst_ci, dst_got);
+ 		ret = do_splice_direct(src_file, &src_off, dst_file,
+ 				       &dst_off, src_objlen, flags);
+-		if (ret < 0) {
+-			dout("do_splice_direct returned %d\n", err);
++		/* Abort on short copies or on error */
++		if (ret < src_objlen) {
++			dout("Failed partial copy (%zd)\n", ret);
+ 			goto out;
+ 		}
+ 		len -= ret;
+@@ -2094,62 +2151,29 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ 		if (err < 0)
+ 			goto out_caps;
+ 	}
+-	object_size = src_ci->i_layout.object_size;
+-	while (len >= object_size) {
+-		ceph_calc_file_object_mapping(&src_ci->i_layout, src_off,
+-					      object_size, &src_objnum,
+-					      &src_objoff, &src_objlen);
+-		ceph_calc_file_object_mapping(&dst_ci->i_layout, dst_off,
+-					      object_size, &dst_objnum,
+-					      &dst_objoff, &dst_objlen);
+-		ceph_oid_init(&src_oid);
+-		ceph_oid_printf(&src_oid, "%llx.%08llx",
+-				src_ci->i_vino.ino, src_objnum);
+-		ceph_oid_init(&dst_oid);
+-		ceph_oid_printf(&dst_oid, "%llx.%08llx",
+-				dst_ci->i_vino.ino, dst_objnum);
+-		/* Do an object remote copy */
+-		err = ceph_osdc_copy_from(
+-			&src_fsc->client->osdc,
+-			src_ci->i_vino.snap, 0,
+-			&src_oid, &src_oloc,
+-			CEPH_OSD_OP_FLAG_FADVISE_SEQUENTIAL |
+-			CEPH_OSD_OP_FLAG_FADVISE_NOCACHE,
+-			&dst_oid, &dst_oloc,
+-			CEPH_OSD_OP_FLAG_FADVISE_SEQUENTIAL |
+-			CEPH_OSD_OP_FLAG_FADVISE_DONTNEED,
+-			dst_ci->i_truncate_seq, dst_ci->i_truncate_size,
+-			CEPH_OSD_COPY_FROM_FLAG_TRUNCATE_SEQ);
+-		if (err) {
+-			if (err == -EOPNOTSUPP) {
+-				src_fsc->have_copy_from2 = false;
+-				pr_notice("OSDs don't support copy-from2; disabling copy offload\n");
+-			}
+-			dout("ceph_osdc_copy_from returned %d\n", err);
+-			if (!ret)
+-				ret = err;
+-			goto out_caps;
+-		}
+-		len -= object_size;
+-		src_off += object_size;
+-		dst_off += object_size;
+-		ret += object_size;
+-	}
+ 
+-	if (len)
+-		/* We still need one final local copy */
+-		do_final_copy = true;
++	size = i_size_read(dst_inode);
++	bytes = ceph_do_objects_copy(src_ci, &src_off, dst_ci, &dst_off,
++				     src_fsc, len, flags);
++	if (bytes <= 0) {
++		if (!ret)
++			ret = bytes;
++		goto out_caps;
++	}
++	dout("Copied %zu bytes out of %zu\n", bytes, len);
++	len -= bytes;
++	ret += bytes;
+ 
+ 	file_update_time(dst_file);
+ 	inode_inc_iversion_raw(dst_inode);
+ 
+-	if (endoff > size) {
++	if (dst_off > size) {
+ 		int caps_flags = 0;
+ 
+ 		/* Let the MDS know about dst file size change */
+-		if (ceph_quota_is_max_bytes_approaching(dst_inode, endoff))
++		if (ceph_quota_is_max_bytes_approaching(dst_inode, dst_off))
+ 			caps_flags |= CHECK_CAPS_NODELAY;
+-		if (ceph_inode_set_size(dst_inode, endoff))
++		if (ceph_inode_set_size(dst_inode, dst_off))
+ 			caps_flags |= CHECK_CAPS_AUTHONLY;
+ 		if (caps_flags)
+ 			ceph_check_caps(dst_ci, caps_flags, NULL);
+@@ -2165,15 +2189,18 @@ static ssize_t __ceph_copy_file_range(struct file *src_file, loff_t src_off,
+ out_caps:
+ 	put_rd_wr_caps(src_ci, src_got, dst_ci, dst_got);
+ 
+-	if (do_final_copy) {
+-		err = do_splice_direct(src_file, &src_off, dst_file,
+-				       &dst_off, len, flags);
+-		if (err < 0) {
+-			dout("do_splice_direct returned %d\n", err);
+-			goto out;
+-		}
+-		len -= err;
+-		ret += err;
++	/*
++	 * Do the final manual copy if we still have some bytes left, unless
++	 * there were errors in remote object copies (len >= object_size).
++	 */
++	if (len && (len < src_ci->i_layout.object_size)) {
++		dout("Final partial copy of %zu bytes\n", len);
++		bytes = do_splice_direct(src_file, &src_off, dst_file,
++					 &dst_off, len, flags);
++		if (bytes > 0)
++			ret += bytes;
++		else
++			dout("Failed partial copy (%zd)\n", bytes);
+ 	}
+ 
+ out:
+diff --git a/fs/cifs/smb2misc.c b/fs/cifs/smb2misc.c
+index 0511aaf451d4..497afb0b9960 100644
+--- a/fs/cifs/smb2misc.c
++++ b/fs/cifs/smb2misc.c
+@@ -766,6 +766,20 @@ smb2_handle_cancelled_close(struct cifs_tcon *tcon, __u64 persistent_fid,
+ 
+ 	cifs_dbg(FYI, "%s: tc_count=%d\n", __func__, tcon->tc_count);
+ 	spin_lock(&cifs_tcp_ses_lock);
++	if (tcon->tc_count <= 0) {
++		struct TCP_Server_Info *server = NULL;
++
++		WARN_ONCE(tcon->tc_count < 0, "tcon refcount is negative");
++		spin_unlock(&cifs_tcp_ses_lock);
++
++		if (tcon->ses)
++			server = tcon->ses->server;
++
++		cifs_server_dbg(FYI, "tid=%u: tcon is closing, skipping async close retry of fid %llu %llu\n",
++				tcon->tid, persistent_fid, volatile_fid);
++
++		return 0;
++	}
+ 	tcon->tc_count++;
+ 	spin_unlock(&cifs_tcp_ses_lock);
+ 
+diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
+index cb3ee916f527..c97570eb2c18 100644
+--- a/fs/cifs/transport.c
++++ b/fs/cifs/transport.c
+@@ -466,7 +466,7 @@ smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	      struct smb_rqst *rqst, int flags)
+ {
+ 	struct kvec iov;
+-	struct smb2_transform_hdr tr_hdr;
++	struct smb2_transform_hdr *tr_hdr;
+ 	struct smb_rqst cur_rqst[MAX_COMPOUND];
+ 	int rc;
+ 
+@@ -476,28 +476,34 @@ smb_send_rqst(struct TCP_Server_Info *server, int num_rqst,
+ 	if (num_rqst > MAX_COMPOUND - 1)
+ 		return -ENOMEM;
+ 
+-	memset(&cur_rqst[0], 0, sizeof(cur_rqst));
+-	memset(&iov, 0, sizeof(iov));
+-	memset(&tr_hdr, 0, sizeof(tr_hdr));
+-
+-	iov.iov_base = &tr_hdr;
+-	iov.iov_len = sizeof(tr_hdr);
+-	cur_rqst[0].rq_iov = &iov;
+-	cur_rqst[0].rq_nvec = 1;
+-
+ 	if (!server->ops->init_transform_rq) {
+ 		cifs_server_dbg(VFS, "Encryption requested but transform "
+ 				"callback is missing\n");
+ 		return -EIO;
+ 	}
+ 
++	tr_hdr = kmalloc(sizeof(*tr_hdr), GFP_NOFS);
++	if (!tr_hdr)
++		return -ENOMEM;
++
++	memset(&cur_rqst[0], 0, sizeof(cur_rqst));
++	memset(&iov, 0, sizeof(iov));
++	memset(tr_hdr, 0, sizeof(*tr_hdr));
++
++	iov.iov_base = tr_hdr;
++	iov.iov_len = sizeof(*tr_hdr);
++	cur_rqst[0].rq_iov = &iov;
++	cur_rqst[0].rq_nvec = 1;
++
+ 	rc = server->ops->init_transform_rq(server, num_rqst + 1,
+ 					    &cur_rqst[0], rqst);
+ 	if (rc)
+-		return rc;
++		goto out;
+ 
+ 	rc = __smb_send_rqst(server, num_rqst + 1, &cur_rqst[0]);
+ 	smb3_free_compound_rqst(num_rqst, &cur_rqst[1]);
++out:
++	kfree(tr_hdr);
+ 	return rc;
+ }
+ 
+diff --git a/fs/ext2/xattr.c b/fs/ext2/xattr.c
+index 0456bc990b5e..62acbe27d8bf 100644
+--- a/fs/ext2/xattr.c
++++ b/fs/ext2/xattr.c
+@@ -56,6 +56,7 @@
+ 
+ #include <linux/buffer_head.h>
+ #include <linux/init.h>
++#include <linux/printk.h>
+ #include <linux/slab.h>
+ #include <linux/mbcache.h>
+ #include <linux/quotaops.h>
+@@ -84,8 +85,8 @@
+ 		printk("\n"); \
+ 	} while (0)
+ #else
+-# define ea_idebug(f...)
+-# define ea_bdebug(f...)
++# define ea_idebug(inode, f...)	no_printk(f)
++# define ea_bdebug(bh, f...)	no_printk(f)
+ #endif
+ 
+ static int ext2_xattr_set2(struct inode *, struct buffer_head *,
+@@ -864,8 +865,7 @@ ext2_xattr_cache_insert(struct mb_cache *cache, struct buffer_head *bh)
+ 				      true);
+ 	if (error) {
+ 		if (error == -EBUSY) {
+-			ea_bdebug(bh, "already in cache (%d cache entries)",
+-				atomic_read(&ext2_xattr_cache->c_entry_count));
++			ea_bdebug(bh, "already in cache");
+ 			error = 0;
+ 		}
+ 	} else
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index c5d05564cd29..37f65ad0d823 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4348,7 +4348,7 @@ make_io:
+ 			if (end > table)
+ 				end = table;
+ 			while (b <= end)
+-				sb_breadahead(sb, b++);
++				sb_breadahead_unmovable(sb, b++);
+ 		}
+ 
+ 		/*
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 4f0444f3cda3..16da3b3481a4 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -372,7 +372,8 @@ static void save_error_info(struct super_block *sb, const char *func,
+ 			    unsigned int line)
+ {
+ 	__save_error_info(sb, func, line);
+-	ext4_commit_super(sb, 1);
++	if (!bdev_read_only(sb->s_bdev))
++		ext4_commit_super(sb, 1);
+ }
+ 
+ /*
+@@ -4331,7 +4332,7 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	/* Pre-read the descriptors into the buffer cache */
+ 	for (i = 0; i < db_count; i++) {
+ 		block = descriptor_loc(sb, logical_sb_block, i);
+-		sb_breadahead(sb, block);
++		sb_breadahead_unmovable(sb, block);
+ 	}
+ 
+ 	for (i = 0; i < db_count; i++) {
+diff --git a/fs/f2fs/checkpoint.c b/fs/f2fs/checkpoint.c
+index 44e84ac5c941..79aaf06004f6 100644
+--- a/fs/f2fs/checkpoint.c
++++ b/fs/f2fs/checkpoint.c
+@@ -1250,20 +1250,20 @@ static void unblock_operations(struct f2fs_sb_info *sbi)
+ 	f2fs_unlock_all(sbi);
+ }
+ 
+-void f2fs_wait_on_all_pages_writeback(struct f2fs_sb_info *sbi)
++void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type)
+ {
+ 	DEFINE_WAIT(wait);
+ 
+ 	for (;;) {
+ 		prepare_to_wait(&sbi->cp_wait, &wait, TASK_UNINTERRUPTIBLE);
+ 
+-		if (!get_pages(sbi, F2FS_WB_CP_DATA))
++		if (!get_pages(sbi, type))
+ 			break;
+ 
+ 		if (unlikely(f2fs_cp_error(sbi)))
+ 			break;
+ 
+-		io_schedule_timeout(5*HZ);
++		io_schedule_timeout(HZ/50);
+ 	}
+ 	finish_wait(&sbi->cp_wait, &wait);
+ }
+@@ -1301,10 +1301,14 @@ static void update_ckpt_flags(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 	else
+ 		__clear_ckpt_flags(ckpt, CP_ORPHAN_PRESENT_FLAG);
+ 
+-	if (is_sbi_flag_set(sbi, SBI_NEED_FSCK) ||
+-		is_sbi_flag_set(sbi, SBI_IS_RESIZEFS))
++	if (is_sbi_flag_set(sbi, SBI_NEED_FSCK))
+ 		__set_ckpt_flags(ckpt, CP_FSCK_FLAG);
+ 
++	if (is_sbi_flag_set(sbi, SBI_IS_RESIZEFS))
++		__set_ckpt_flags(ckpt, CP_RESIZEFS_FLAG);
++	else
++		__clear_ckpt_flags(ckpt, CP_RESIZEFS_FLAG);
++
+ 	if (is_sbi_flag_set(sbi, SBI_CP_DISABLED))
+ 		__set_ckpt_flags(ckpt, CP_DISABLED_FLAG);
+ 	else
+@@ -1384,8 +1388,6 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 
+ 	/* Flush all the NAT/SIT pages */
+ 	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+-	f2fs_bug_on(sbi, get_pages(sbi, F2FS_DIRTY_META) &&
+-					!f2fs_cp_error(sbi));
+ 
+ 	/*
+ 	 * modify checkpoint
+@@ -1493,11 +1495,11 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 
+ 	/* Here, we have one bio having CP pack except cp pack 2 page */
+ 	f2fs_sync_meta_pages(sbi, META, LONG_MAX, FS_CP_META_IO);
+-	f2fs_bug_on(sbi, get_pages(sbi, F2FS_DIRTY_META) &&
+-					!f2fs_cp_error(sbi));
++	/* Wait for all dirty meta pages to be submitted for IO */
++	f2fs_wait_on_all_pages(sbi, F2FS_DIRTY_META);
+ 
+ 	/* wait for previous submitted meta pages writeback */
+-	f2fs_wait_on_all_pages_writeback(sbi);
++	f2fs_wait_on_all_pages(sbi, F2FS_WB_CP_DATA);
+ 
+ 	/* flush all device cache */
+ 	err = f2fs_flush_device_cache(sbi);
+@@ -1506,7 +1508,7 @@ static int do_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc)
+ 
+ 	/* barrier and flush checkpoint cp pack 2 page if it can */
+ 	commit_checkpoint(sbi, ckpt, start_blk);
+-	f2fs_wait_on_all_pages_writeback(sbi);
++	f2fs_wait_on_all_pages(sbi, F2FS_WB_CP_DATA);
+ 
+ 	/*
+ 	 * invalidate intermediate page cache borrowed from meta inode which are
+diff --git a/fs/f2fs/compress.c b/fs/f2fs/compress.c
+index d8a64be90a50..837e14b7ef52 100644
+--- a/fs/f2fs/compress.c
++++ b/fs/f2fs/compress.c
+@@ -385,16 +385,22 @@ static int f2fs_compress_pages(struct compress_ctx *cc)
+ 	for (i = 0; i < COMPRESS_DATA_RESERVED_SIZE; i++)
+ 		cc->cbuf->reserved[i] = cpu_to_le32(0);
+ 
++	nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE);
++
++	/* zero out any unused part of the last page */
++	memset(&cc->cbuf->cdata[cc->clen], 0,
++	       (nr_cpages * PAGE_SIZE) - (cc->clen + COMPRESS_HEADER_SIZE));
++
+ 	vunmap(cc->cbuf);
+ 	vunmap(cc->rbuf);
+ 
+-	nr_cpages = DIV_ROUND_UP(cc->clen + COMPRESS_HEADER_SIZE, PAGE_SIZE);
+-
+ 	for (i = nr_cpages; i < cc->nr_cpages; i++) {
+ 		f2fs_put_compressed_page(cc->cpages[i]);
+ 		cc->cpages[i] = NULL;
+ 	}
+ 
++	cops->destroy_compress_ctx(cc);
++
+ 	cc->nr_cpages = nr_cpages;
+ 
+ 	trace_f2fs_compress_pages_end(cc->inode, cc->cluster_idx,
+@@ -474,6 +480,8 @@ out_vunmap_cbuf:
+ out_vunmap_rbuf:
+ 	vunmap(dic->rbuf);
+ out_free_dic:
++	if (verity)
++		refcount_add(dic->nr_cpages - 1, &dic->ref);
+ 	if (!verity)
+ 		f2fs_decompress_end_io(dic->rpages, dic->cluster_size,
+ 								ret, false);
+@@ -532,8 +540,7 @@ static bool __cluster_may_compress(struct compress_ctx *cc)
+ 	return true;
+ }
+ 
+-/* return # of compressed block addresses */
+-static int f2fs_compressed_blocks(struct compress_ctx *cc)
++static int __f2fs_cluster_blocks(struct compress_ctx *cc, bool compr)
+ {
+ 	struct dnode_of_data dn;
+ 	int ret;
+@@ -556,8 +563,13 @@ static int f2fs_compressed_blocks(struct compress_ctx *cc)
+ 
+ 			blkaddr = datablock_addr(dn.inode,
+ 					dn.node_page, dn.ofs_in_node + i);
+-			if (blkaddr != NULL_ADDR)
+-				ret++;
++			if (compr) {
++				if (__is_valid_data_blkaddr(blkaddr))
++					ret++;
++			} else {
++				if (blkaddr != NULL_ADDR)
++					ret++;
++			}
+ 		}
+ 	}
+ fail:
+@@ -565,6 +577,18 @@ fail:
+ 	return ret;
+ }
+ 
++/* return # of compressed blocks in compressed cluster */
++static int f2fs_compressed_blocks(struct compress_ctx *cc)
++{
++	return __f2fs_cluster_blocks(cc, true);
++}
++
++/* return # of valid blocks in compressed cluster */
++static int f2fs_cluster_blocks(struct compress_ctx *cc, bool compr)
++{
++	return __f2fs_cluster_blocks(cc, false);
++}
++
+ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index)
+ {
+ 	struct compress_ctx cc = {
+@@ -574,7 +598,7 @@ int f2fs_is_compressed_cluster(struct inode *inode, pgoff_t index)
+ 		.cluster_idx = index >> F2FS_I(inode)->i_log_cluster_size,
+ 	};
+ 
+-	return f2fs_compressed_blocks(&cc);
++	return f2fs_cluster_blocks(&cc, false);
+ }
+ 
+ static bool cluster_may_compress(struct compress_ctx *cc)
+@@ -623,7 +647,7 @@ static int prepare_compress_overwrite(struct compress_ctx *cc,
+ 	bool prealloc;
+ 
+ retry:
+-	ret = f2fs_compressed_blocks(cc);
++	ret = f2fs_cluster_blocks(cc, false);
+ 	if (ret <= 0)
+ 		return ret;
+ 
+@@ -772,7 +796,6 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 		.encrypted_page = NULL,
+ 		.compressed_page = NULL,
+ 		.submitted = false,
+-		.need_lock = LOCK_RETRY,
+ 		.io_type = io_type,
+ 		.io_wbc = wbc,
+ 		.encrypted = f2fs_encrypted_file(cc->inode),
+@@ -785,9 +808,10 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 	loff_t psize;
+ 	int i, err;
+ 
+-	set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
++	if (!f2fs_trylock_op(sbi))
++		return -EAGAIN;
+ 
+-	f2fs_lock_op(sbi);
++	set_new_dnode(&dn, cc->inode, NULL, NULL, 0);
+ 
+ 	err = f2fs_get_dnode_of_data(&dn, start_idx, LOOKUP_NODE);
+ 	if (err)
+@@ -845,7 +869,7 @@ static int f2fs_write_compressed_pages(struct compress_ctx *cc,
+ 
+ 		blkaddr = datablock_addr(dn.inode, dn.node_page,
+ 							dn.ofs_in_node);
+-		fio.page = cic->rpages[i];
++		fio.page = cc->rpages[i];
+ 		fio.old_blkaddr = blkaddr;
+ 
+ 		/* cluster header */
+@@ -984,6 +1008,15 @@ retry_write:
+ 				unlock_page(cc->rpages[i]);
+ 				ret = 0;
+ 			} else if (ret == -EAGAIN) {
++				/*
++				 * for quota file, just redirty left pages to
++				 * avoid deadlock caused by cluster update race
++				 * from foreground operation.
++				 */
++				if (IS_NOQUOTA(cc->inode)) {
++					err = 0;
++					goto out_err;
++				}
+ 				ret = 0;
+ 				cond_resched();
+ 				congestion_wait(BLK_RW_ASYNC, HZ/50);
+@@ -992,16 +1025,12 @@ retry_write:
+ 				goto retry_write;
+ 			}
+ 			err = ret;
+-			goto out_fail;
++			goto out_err;
+ 		}
+ 
+ 		*submitted += _submitted;
+ 	}
+ 	return 0;
+-
+-out_fail:
+-	/* TODO: revoke partially updated block addresses */
+-	BUG_ON(compr_blocks);
+ out_err:
+ 	for (++i; i < cc->cluster_size; i++) {
+ 		if (!cc->rpages[i])
+diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c
+index b27b72107911..34990866cfe9 100644
+--- a/fs/f2fs/data.c
++++ b/fs/f2fs/data.c
+@@ -191,12 +191,37 @@ static void f2fs_verify_pages(struct page **rpages, unsigned int cluster_size)
+ 
+ static void f2fs_verify_bio(struct bio *bio)
+ {
+-	struct page *page = bio_first_page_all(bio);
+-	struct decompress_io_ctx *dic =
+-			(struct decompress_io_ctx *)page_private(page);
++	struct bio_vec *bv;
++	struct bvec_iter_all iter_all;
++
++	bio_for_each_segment_all(bv, bio, iter_all) {
++		struct page *page = bv->bv_page;
++		struct decompress_io_ctx *dic;
++
++		dic = (struct decompress_io_ctx *)page_private(page);
++
++		if (dic) {
++			if (refcount_dec_not_one(&dic->ref))
++				continue;
++			f2fs_verify_pages(dic->rpages,
++						dic->cluster_size);
++			f2fs_free_dic(dic);
++			continue;
++		}
++
++		if (bio->bi_status || PageError(page))
++			goto clear_uptodate;
+ 
+-	f2fs_verify_pages(dic->rpages, dic->cluster_size);
+-	f2fs_free_dic(dic);
++		if (fsverity_verify_page(page)) {
++			SetPageUptodate(page);
++			goto unlock;
++		}
++clear_uptodate:
++		ClearPageUptodate(page);
++		ClearPageError(page);
++unlock:
++		unlock_page(page);
++	}
+ }
+ #endif
+ 
+diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h
+index 5355be6b6755..71801a1709f0 100644
+--- a/fs/f2fs/f2fs.h
++++ b/fs/f2fs/f2fs.h
+@@ -100,6 +100,7 @@ extern const char *f2fs_fault_name[FAULT_MAX];
+ #define F2FS_MOUNT_INLINE_XATTR_SIZE	0x00800000
+ #define F2FS_MOUNT_RESERVE_ROOT		0x01000000
+ #define F2FS_MOUNT_DISABLE_CHECKPOINT	0x02000000
++#define F2FS_MOUNT_NORECOVERY		0x04000000
+ 
+ #define F2FS_OPTION(sbi)	((sbi)->mount_opt)
+ #define clear_opt(sbi, option)	(F2FS_OPTION(sbi).opt &= ~F2FS_MOUNT_##option)
+@@ -675,6 +676,44 @@ enum {
+ 	MAX_GC_FAILURE
+ };
+ 
++/* used for f2fs_inode_info->flags */
++enum {
++	FI_NEW_INODE,		/* indicate newly allocated inode */
++	FI_DIRTY_INODE,		/* indicate inode is dirty or not */
++	FI_AUTO_RECOVER,	/* indicate inode is recoverable */
++	FI_DIRTY_DIR,		/* indicate directory has dirty pages */
++	FI_INC_LINK,		/* need to increment i_nlink */
++	FI_ACL_MODE,		/* indicate acl mode */
++	FI_NO_ALLOC,		/* should not allocate any blocks */
++	FI_FREE_NID,		/* free allocated nide */
++	FI_NO_EXTENT,		/* not to use the extent cache */
++	FI_INLINE_XATTR,	/* used for inline xattr */
++	FI_INLINE_DATA,		/* used for inline data*/
++	FI_INLINE_DENTRY,	/* used for inline dentry */
++	FI_APPEND_WRITE,	/* inode has appended data */
++	FI_UPDATE_WRITE,	/* inode has in-place-update data */
++	FI_NEED_IPU,		/* used for ipu per file */
++	FI_ATOMIC_FILE,		/* indicate atomic file */
++	FI_ATOMIC_COMMIT,	/* indicate the state of atomical committing */
++	FI_VOLATILE_FILE,	/* indicate volatile file */
++	FI_FIRST_BLOCK_WRITTEN,	/* indicate #0 data block was written */
++	FI_DROP_CACHE,		/* drop dirty page cache */
++	FI_DATA_EXIST,		/* indicate data exists */
++	FI_INLINE_DOTS,		/* indicate inline dot dentries */
++	FI_DO_DEFRAG,		/* indicate defragment is running */
++	FI_DIRTY_FILE,		/* indicate regular/symlink has dirty pages */
++	FI_NO_PREALLOC,		/* indicate skipped preallocated blocks */
++	FI_HOT_DATA,		/* indicate file is hot */
++	FI_EXTRA_ATTR,		/* indicate file has extra attribute */
++	FI_PROJ_INHERIT,	/* indicate file inherits projectid */
++	FI_PIN_FILE,		/* indicate file should not be gced */
++	FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */
++	FI_VERITY_IN_PROGRESS,	/* building fs-verity Merkle tree */
++	FI_COMPRESSED_FILE,	/* indicate file's data can be compressed */
++	FI_MMAP_FILE,		/* indicate file was mmapped */
++	FI_MAX,			/* max flag, never be used */
++};
++
+ struct f2fs_inode_info {
+ 	struct inode vfs_inode;		/* serve a vfs inode */
+ 	unsigned long i_flags;		/* keep an inode flags for ioctl */
+@@ -687,7 +726,7 @@ struct f2fs_inode_info {
+ 	umode_t i_acl_mode;		/* keep file acl mode temporarily */
+ 
+ 	/* Use below internally in f2fs*/
+-	unsigned long flags;		/* use to pass per-file flags */
++	unsigned long flags[BITS_TO_LONGS(FI_MAX)];	/* use to pass per-file flags */
+ 	struct rw_semaphore i_sem;	/* protect fi info */
+ 	atomic_t dirty_pages;		/* # of dirty pages */
+ 	f2fs_hash_t chash;		/* hash value of given file name */
+@@ -2497,43 +2536,6 @@ static inline __u32 f2fs_mask_flags(umode_t mode, __u32 flags)
+ 		return flags & F2FS_OTHER_FLMASK;
+ }
+ 
+-/* used for f2fs_inode_info->flags */
+-enum {
+-	FI_NEW_INODE,		/* indicate newly allocated inode */
+-	FI_DIRTY_INODE,		/* indicate inode is dirty or not */
+-	FI_AUTO_RECOVER,	/* indicate inode is recoverable */
+-	FI_DIRTY_DIR,		/* indicate directory has dirty pages */
+-	FI_INC_LINK,		/* need to increment i_nlink */
+-	FI_ACL_MODE,		/* indicate acl mode */
+-	FI_NO_ALLOC,		/* should not allocate any blocks */
+-	FI_FREE_NID,		/* free allocated nide */
+-	FI_NO_EXTENT,		/* not to use the extent cache */
+-	FI_INLINE_XATTR,	/* used for inline xattr */
+-	FI_INLINE_DATA,		/* used for inline data*/
+-	FI_INLINE_DENTRY,	/* used for inline dentry */
+-	FI_APPEND_WRITE,	/* inode has appended data */
+-	FI_UPDATE_WRITE,	/* inode has in-place-update data */
+-	FI_NEED_IPU,		/* used for ipu per file */
+-	FI_ATOMIC_FILE,		/* indicate atomic file */
+-	FI_ATOMIC_COMMIT,	/* indicate the state of atomical committing */
+-	FI_VOLATILE_FILE,	/* indicate volatile file */
+-	FI_FIRST_BLOCK_WRITTEN,	/* indicate #0 data block was written */
+-	FI_DROP_CACHE,		/* drop dirty page cache */
+-	FI_DATA_EXIST,		/* indicate data exists */
+-	FI_INLINE_DOTS,		/* indicate inline dot dentries */
+-	FI_DO_DEFRAG,		/* indicate defragment is running */
+-	FI_DIRTY_FILE,		/* indicate regular/symlink has dirty pages */
+-	FI_NO_PREALLOC,		/* indicate skipped preallocated blocks */
+-	FI_HOT_DATA,		/* indicate file is hot */
+-	FI_EXTRA_ATTR,		/* indicate file has extra attribute */
+-	FI_PROJ_INHERIT,	/* indicate file inherits projectid */
+-	FI_PIN_FILE,		/* indicate file should not be gced */
+-	FI_ATOMIC_REVOKE_REQUEST, /* request to drop atomic data */
+-	FI_VERITY_IN_PROGRESS,	/* building fs-verity Merkle tree */
+-	FI_COMPRESSED_FILE,	/* indicate file's data can be compressed */
+-	FI_MMAP_FILE,		/* indicate file was mmapped */
+-};
+-
+ static inline void __mark_inode_dirty_flag(struct inode *inode,
+ 						int flag, bool set)
+ {
+@@ -2555,20 +2557,18 @@ static inline void __mark_inode_dirty_flag(struct inode *inode,
+ 
+ static inline void set_inode_flag(struct inode *inode, int flag)
+ {
+-	if (!test_bit(flag, &F2FS_I(inode)->flags))
+-		set_bit(flag, &F2FS_I(inode)->flags);
++	test_and_set_bit(flag, F2FS_I(inode)->flags);
+ 	__mark_inode_dirty_flag(inode, flag, true);
+ }
+ 
+ static inline int is_inode_flag_set(struct inode *inode, int flag)
+ {
+-	return test_bit(flag, &F2FS_I(inode)->flags);
++	return test_bit(flag, F2FS_I(inode)->flags);
+ }
+ 
+ static inline void clear_inode_flag(struct inode *inode, int flag)
+ {
+-	if (test_bit(flag, &F2FS_I(inode)->flags))
+-		clear_bit(flag, &F2FS_I(inode)->flags);
++	test_and_clear_bit(flag, F2FS_I(inode)->flags);
+ 	__mark_inode_dirty_flag(inode, flag, false);
+ }
+ 
+@@ -2659,19 +2659,19 @@ static inline void get_inline_info(struct inode *inode, struct f2fs_inode *ri)
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
+ 
+ 	if (ri->i_inline & F2FS_INLINE_XATTR)
+-		set_bit(FI_INLINE_XATTR, &fi->flags);
++		set_bit(FI_INLINE_XATTR, fi->flags);
+ 	if (ri->i_inline & F2FS_INLINE_DATA)
+-		set_bit(FI_INLINE_DATA, &fi->flags);
++		set_bit(FI_INLINE_DATA, fi->flags);
+ 	if (ri->i_inline & F2FS_INLINE_DENTRY)
+-		set_bit(FI_INLINE_DENTRY, &fi->flags);
++		set_bit(FI_INLINE_DENTRY, fi->flags);
+ 	if (ri->i_inline & F2FS_DATA_EXIST)
+-		set_bit(FI_DATA_EXIST, &fi->flags);
++		set_bit(FI_DATA_EXIST, fi->flags);
+ 	if (ri->i_inline & F2FS_INLINE_DOTS)
+-		set_bit(FI_INLINE_DOTS, &fi->flags);
++		set_bit(FI_INLINE_DOTS, fi->flags);
+ 	if (ri->i_inline & F2FS_EXTRA_ATTR)
+-		set_bit(FI_EXTRA_ATTR, &fi->flags);
++		set_bit(FI_EXTRA_ATTR, fi->flags);
+ 	if (ri->i_inline & F2FS_PIN_FILE)
+-		set_bit(FI_PIN_FILE, &fi->flags);
++		set_bit(FI_PIN_FILE, fi->flags);
+ }
+ 
+ static inline void set_raw_inline(struct inode *inode, struct f2fs_inode *ri)
+@@ -3308,7 +3308,7 @@ int f2fs_get_valid_checkpoint(struct f2fs_sb_info *sbi);
+ void f2fs_update_dirty_page(struct inode *inode, struct page *page);
+ void f2fs_remove_dirty_inode(struct inode *inode);
+ int f2fs_sync_dirty_inodes(struct f2fs_sb_info *sbi, enum inode_type type);
+-void f2fs_wait_on_all_pages_writeback(struct f2fs_sb_info *sbi);
++void f2fs_wait_on_all_pages(struct f2fs_sb_info *sbi, int type);
+ int f2fs_write_checkpoint(struct f2fs_sb_info *sbi, struct cp_control *cpc);
+ void f2fs_init_ino_entry_info(struct f2fs_sb_info *sbi);
+ int __init f2fs_create_checkpoint_caches(void);
+diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c
+index 0d4da644df3b..a41c633ac6cf 100644
+--- a/fs/f2fs/file.c
++++ b/fs/f2fs/file.c
+@@ -1787,12 +1787,15 @@ static int f2fs_file_flush(struct file *file, fl_owner_t id)
+ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ {
+ 	struct f2fs_inode_info *fi = F2FS_I(inode);
++	u32 masked_flags = fi->i_flags & mask;
++
++	f2fs_bug_on(F2FS_I_SB(inode), (iflags & ~mask));
+ 
+ 	/* Is it quota file? Do not allow user to mess with it */
+ 	if (IS_NOQUOTA(inode))
+ 		return -EPERM;
+ 
+-	if ((iflags ^ fi->i_flags) & F2FS_CASEFOLD_FL) {
++	if ((iflags ^ masked_flags) & F2FS_CASEFOLD_FL) {
+ 		if (!f2fs_sb_has_casefold(F2FS_I_SB(inode)))
+ 			return -EOPNOTSUPP;
+ 		if (!f2fs_empty_dir(inode))
+@@ -1806,9 +1809,9 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ 			return -EINVAL;
+ 	}
+ 
+-	if ((iflags ^ fi->i_flags) & F2FS_COMPR_FL) {
++	if ((iflags ^ masked_flags) & F2FS_COMPR_FL) {
+ 		if (S_ISREG(inode->i_mode) &&
+-			(fi->i_flags & F2FS_COMPR_FL || i_size_read(inode) ||
++			(masked_flags & F2FS_COMPR_FL || i_size_read(inode) ||
+ 						F2FS_HAS_BLOCKS(inode)))
+ 			return -EINVAL;
+ 		if (iflags & F2FS_NOCOMP_FL)
+@@ -1825,8 +1828,8 @@ static int f2fs_setflags_common(struct inode *inode, u32 iflags, u32 mask)
+ 			set_compress_context(inode);
+ 		}
+ 	}
+-	if ((iflags ^ fi->i_flags) & F2FS_NOCOMP_FL) {
+-		if (fi->i_flags & F2FS_COMPR_FL)
++	if ((iflags ^ masked_flags) & F2FS_NOCOMP_FL) {
++		if (masked_flags & F2FS_COMPR_FL)
+ 			return -EINVAL;
+ 	}
+ 
+diff --git a/fs/f2fs/gc.c b/fs/f2fs/gc.c
+index db8725d473b5..3cced15efebc 100644
+--- a/fs/f2fs/gc.c
++++ b/fs/f2fs/gc.c
+@@ -1018,8 +1018,8 @@ next_step:
+ 		 * race condition along with SSR block allocation.
+ 		 */
+ 		if ((gc_type == BG_GC && has_not_enough_free_secs(sbi, 0, 0)) ||
+-				get_valid_blocks(sbi, segno, false) ==
+-							sbi->blocks_per_seg)
++				get_valid_blocks(sbi, segno, true) ==
++							BLKS_PER_SEC(sbi))
+ 			return submitted;
+ 
+ 		if (check_valid_map(sbi, segno, off) == 0)
+@@ -1434,12 +1434,19 @@ static int free_segment_range(struct f2fs_sb_info *sbi, unsigned int start,
+ static void update_sb_metadata(struct f2fs_sb_info *sbi, int secs)
+ {
+ 	struct f2fs_super_block *raw_sb = F2FS_RAW_SUPER(sbi);
+-	int section_count = le32_to_cpu(raw_sb->section_count);
+-	int segment_count = le32_to_cpu(raw_sb->segment_count);
+-	int segment_count_main = le32_to_cpu(raw_sb->segment_count_main);
+-	long long block_count = le64_to_cpu(raw_sb->block_count);
++	int section_count;
++	int segment_count;
++	int segment_count_main;
++	long long block_count;
+ 	int segs = secs * sbi->segs_per_sec;
+ 
++	down_write(&sbi->sb_lock);
++
++	section_count = le32_to_cpu(raw_sb->section_count);
++	segment_count = le32_to_cpu(raw_sb->segment_count);
++	segment_count_main = le32_to_cpu(raw_sb->segment_count_main);
++	block_count = le64_to_cpu(raw_sb->block_count);
++
+ 	raw_sb->section_count = cpu_to_le32(section_count + secs);
+ 	raw_sb->segment_count = cpu_to_le32(segment_count + segs);
+ 	raw_sb->segment_count_main = cpu_to_le32(segment_count_main + segs);
+@@ -1453,6 +1460,8 @@ static void update_sb_metadata(struct f2fs_sb_info *sbi, int secs)
+ 		raw_sb->devs[last_dev].total_segments =
+ 						cpu_to_le32(dev_segs + segs);
+ 	}
++
++	up_write(&sbi->sb_lock);
+ }
+ 
+ static void update_fs_metadata(struct f2fs_sb_info *sbi, int secs)
+@@ -1570,11 +1579,17 @@ int f2fs_resize_fs(struct f2fs_sb_info *sbi, __u64 block_count)
+ 		goto out;
+ 	}
+ 
++	mutex_lock(&sbi->cp_mutex);
+ 	update_fs_metadata(sbi, -secs);
+ 	clear_sbi_flag(sbi, SBI_IS_RESIZEFS);
++	set_sbi_flag(sbi, SBI_IS_DIRTY);
++	mutex_unlock(&sbi->cp_mutex);
++
+ 	err = f2fs_sync_fs(sbi->sb, 1);
+ 	if (err) {
++		mutex_lock(&sbi->cp_mutex);
+ 		update_fs_metadata(sbi, secs);
++		mutex_unlock(&sbi->cp_mutex);
+ 		update_sb_metadata(sbi, secs);
+ 		f2fs_commit_super(sbi, false);
+ 	}
+diff --git a/fs/f2fs/inode.c b/fs/f2fs/inode.c
+index 78c3f1d70f1d..901e9f4ce12b 100644
+--- a/fs/f2fs/inode.c
++++ b/fs/f2fs/inode.c
+@@ -345,7 +345,7 @@ static int do_read_inode(struct inode *inode)
+ 	fi->i_flags = le32_to_cpu(ri->i_flags);
+ 	if (S_ISREG(inode->i_mode))
+ 		fi->i_flags &= ~F2FS_PROJINHERIT_FL;
+-	fi->flags = 0;
++	bitmap_zero(fi->flags, FI_MAX);
+ 	fi->i_advise = ri->i_advise;
+ 	fi->i_pino = le32_to_cpu(ri->i_pino);
+ 	fi->i_dir_level = ri->i_dir_level;
+diff --git a/fs/f2fs/node.c b/fs/f2fs/node.c
+index 9d02cdcdbb07..e58c4c628834 100644
+--- a/fs/f2fs/node.c
++++ b/fs/f2fs/node.c
+@@ -1562,15 +1562,16 @@ static int __write_node_page(struct page *page, bool atomic, bool *submitted,
+ 	if (atomic && !test_opt(sbi, NOBARRIER))
+ 		fio.op_flags |= REQ_PREFLUSH | REQ_FUA;
+ 
+-	set_page_writeback(page);
+-	ClearPageError(page);
+-
++	/* should add to global list before clearing PAGECACHE status */
+ 	if (f2fs_in_warm_node_list(sbi, page)) {
+ 		seq = f2fs_add_fsync_node_entry(sbi, page);
+ 		if (seq_id)
+ 			*seq_id = seq;
+ 	}
+ 
++	set_page_writeback(page);
++	ClearPageError(page);
++
+ 	fio.old_blkaddr = ni.blk_addr;
+ 	f2fs_do_write_node_page(nid, &fio);
+ 	set_node_addr(sbi, &ni, fio.new_blkaddr, is_fsync_dnode(page));
+diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c
+index 65a7a432dfee..8deb0a260d92 100644
+--- a/fs/f2fs/super.c
++++ b/fs/f2fs/super.c
+@@ -446,7 +446,7 @@ static int parse_options(struct super_block *sb, char *options)
+ 			break;
+ 		case Opt_norecovery:
+ 			/* this option mounts f2fs with ro */
+-			set_opt(sbi, DISABLE_ROLL_FORWARD);
++			set_opt(sbi, NORECOVERY);
+ 			if (!f2fs_readonly(sb))
+ 				return -EINVAL;
+ 			break;
+@@ -1172,7 +1172,7 @@ static void f2fs_put_super(struct super_block *sb)
+ 	/* our cp_error case, we can wait for any writeback page */
+ 	f2fs_flush_merged_writes(sbi);
+ 
+-	f2fs_wait_on_all_pages_writeback(sbi);
++	f2fs_wait_on_all_pages(sbi, F2FS_WB_CP_DATA);
+ 
+ 	f2fs_bug_on(sbi, sbi->fsync_node_num);
+ 
+@@ -1446,6 +1446,8 @@ static int f2fs_show_options(struct seq_file *seq, struct dentry *root)
+ 	}
+ 	if (test_opt(sbi, DISABLE_ROLL_FORWARD))
+ 		seq_puts(seq, ",disable_roll_forward");
++	if (test_opt(sbi, NORECOVERY))
++		seq_puts(seq, ",norecovery");
+ 	if (test_opt(sbi, DISCARD))
+ 		seq_puts(seq, ",discard");
+ 	else
+@@ -1927,6 +1929,7 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type,
+ 	int offset = off & (sb->s_blocksize - 1);
+ 	size_t towrite = len;
+ 	struct page *page;
++	void *fsdata = NULL;
+ 	char *kaddr;
+ 	int err = 0;
+ 	int tocopy;
+@@ -1936,7 +1939,7 @@ static ssize_t f2fs_quota_write(struct super_block *sb, int type,
+ 								towrite);
+ retry:
+ 		err = a_ops->write_begin(NULL, mapping, off, tocopy, 0,
+-							&page, NULL);
++							&page, &fsdata);
+ 		if (unlikely(err)) {
+ 			if (err == -ENOMEM) {
+ 				congestion_wait(BLK_RW_ASYNC, HZ/50);
+@@ -1952,7 +1955,7 @@ retry:
+ 		flush_dcache_page(page);
+ 
+ 		a_ops->write_end(NULL, mapping, off, tocopy, tocopy,
+-						page, NULL);
++						page, fsdata);
+ 		offset = 0;
+ 		towrite -= tocopy;
+ 		off += tocopy;
+@@ -3598,7 +3601,8 @@ try_onemore:
+ 		goto reset_checkpoint;
+ 
+ 	/* recover fsynced data */
+-	if (!test_opt(sbi, DISABLE_ROLL_FORWARD)) {
++	if (!test_opt(sbi, DISABLE_ROLL_FORWARD) &&
++			!test_opt(sbi, NORECOVERY)) {
+ 		/*
+ 		 * mount should be failed, when device has readonly mode, and
+ 		 * previous checkpoint was not done by clean system shutdown.
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 08dd6a430234..60d911e293e6 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -104,16 +104,22 @@ __acquires(&sdp->sd_ail_lock)
+ 		gfs2_assert(sdp, bd->bd_tr == tr);
+ 
+ 		if (!buffer_busy(bh)) {
+-			if (!buffer_uptodate(bh) &&
+-			    !test_and_set_bit(SDF_AIL1_IO_ERROR,
++			if (buffer_uptodate(bh)) {
++				list_move(&bd->bd_ail_st_list,
++					  &tr->tr_ail2_list);
++				continue;
++			}
++			if (!test_and_set_bit(SDF_AIL1_IO_ERROR,
+ 					      &sdp->sd_flags)) {
+ 				gfs2_io_error_bh(sdp, bh);
+ 				*withdraw = true;
+ 			}
+-			list_move(&bd->bd_ail_st_list, &tr->tr_ail2_list);
+-			continue;
+ 		}
+ 
++		if (gfs2_withdrawn(sdp)) {
++			gfs2_remove_from_ail(bd);
++			continue;
++		}
+ 		if (!buffer_dirty(bh))
+ 			continue;
+ 		if (gl == bd->bd_gl)
+@@ -862,6 +868,8 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ 				if (gfs2_ail1_empty(sdp))
+ 					break;
+ 			}
++			if (gfs2_withdrawn(sdp))
++				goto out;
+ 			atomic_dec(&sdp->sd_log_blks_free); /* Adjust for unreserved buffer */
+ 			trace_gfs2_log_blocks(sdp, -1);
+ 			log_write_header(sdp, flags);
+@@ -874,6 +882,7 @@ void gfs2_log_flush(struct gfs2_sbd *sdp, struct gfs2_glock *gl, u32 flags)
+ 			atomic_set(&sdp->sd_freeze_state, SFS_FROZEN);
+ 	}
+ 
++out:
+ 	trace_gfs2_log_flush(sdp, 0, flags);
+ 	up_write(&sdp->sd_log_flush_lock);
+ 
+diff --git a/fs/nfs/callback_proc.c b/fs/nfs/callback_proc.c
+index cd4c6bc81cae..40d31024b72d 100644
+--- a/fs/nfs/callback_proc.c
++++ b/fs/nfs/callback_proc.c
+@@ -128,6 +128,8 @@ static struct inode *nfs_layout_find_inode_by_stateid(struct nfs_client *clp,
+ 
+ 	list_for_each_entry_rcu(server, &clp->cl_superblocks, client_link) {
+ 		list_for_each_entry(lo, &server->layouts, plh_layouts) {
++			if (!pnfs_layout_is_valid(lo))
++				continue;
+ 			if (stateid != NULL &&
+ 			    !nfs4_stateid_match_other(stateid, &lo->plh_stateid))
+ 				continue;
+diff --git a/fs/nfs/direct.c b/fs/nfs/direct.c
+index b768a0b42e82..ade2435551c8 100644
+--- a/fs/nfs/direct.c
++++ b/fs/nfs/direct.c
+@@ -571,6 +571,7 @@ ssize_t nfs_file_direct_read(struct kiocb *iocb, struct iov_iter *iter)
+ 	l_ctx = nfs_get_lock_context(dreq->ctx);
+ 	if (IS_ERR(l_ctx)) {
+ 		result = PTR_ERR(l_ctx);
++		nfs_direct_req_release(dreq);
+ 		goto out_release;
+ 	}
+ 	dreq->l_ctx = l_ctx;
+@@ -990,6 +991,7 @@ ssize_t nfs_file_direct_write(struct kiocb *iocb, struct iov_iter *iter)
+ 	l_ctx = nfs_get_lock_context(dreq->ctx);
+ 	if (IS_ERR(l_ctx)) {
+ 		result = PTR_ERR(l_ctx);
++		nfs_direct_req_release(dreq);
+ 		goto out_release;
+ 	}
+ 	dreq->l_ctx = l_ctx;
+diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c
+index 11bf15800ac9..a10fb87c6ac3 100644
+--- a/fs/nfs/inode.c
++++ b/fs/nfs/inode.c
+@@ -959,16 +959,16 @@ struct nfs_open_context *alloc_nfs_open_context(struct dentry *dentry,
+ 						struct file *filp)
+ {
+ 	struct nfs_open_context *ctx;
+-	const struct cred *cred = get_current_cred();
+ 
+ 	ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
+-	if (!ctx) {
+-		put_cred(cred);
++	if (!ctx)
+ 		return ERR_PTR(-ENOMEM);
+-	}
+ 	nfs_sb_active(dentry->d_sb);
+ 	ctx->dentry = dget(dentry);
+-	ctx->cred = cred;
++	if (filp)
++		ctx->cred = get_cred(filp->f_cred);
++	else
++		ctx->cred = get_current_cred();
+ 	ctx->ll_cred = NULL;
+ 	ctx->state = NULL;
+ 	ctx->mode = f_mode;
+diff --git a/fs/nfs/nfs4file.c b/fs/nfs/nfs4file.c
+index 1297919e0fce..8e5d6223ddd3 100644
+--- a/fs/nfs/nfs4file.c
++++ b/fs/nfs/nfs4file.c
+@@ -252,6 +252,9 @@ static loff_t nfs42_remap_file_range(struct file *src_file, loff_t src_off,
+ 	if (remap_flags & ~REMAP_FILE_ADVISORY)
+ 		return -EINVAL;
+ 
++	if (IS_SWAPFILE(dst_inode) || IS_SWAPFILE(src_inode))
++		return -ETXTBSY;
++
+ 	/* check alignment w.r.t. clone_blksize */
+ 	ret = -EINVAL;
+ 	if (bs) {
+diff --git a/fs/nfs/nfsroot.c b/fs/nfs/nfsroot.c
+index effaa4247b91..8d3278805602 100644
+--- a/fs/nfs/nfsroot.c
++++ b/fs/nfs/nfsroot.c
+@@ -88,7 +88,7 @@
+ #define NFS_ROOT		"/tftpboot/%s"
+ 
+ /* Default NFSROOT mount options. */
+-#define NFS_DEF_OPTIONS		"vers=2,udp,rsize=4096,wsize=4096"
++#define NFS_DEF_OPTIONS		"vers=2,tcp,rsize=4096,wsize=4096"
+ 
+ /* Parameters passed from the kernel command line */
+ static char nfs_root_parms[NFS_MAXPATHLEN + 1] __initdata = "";
+diff --git a/fs/nfs/pagelist.c b/fs/nfs/pagelist.c
+index 8b7c525dbbf7..b736912098ee 100644
+--- a/fs/nfs/pagelist.c
++++ b/fs/nfs/pagelist.c
+@@ -886,15 +886,6 @@ static void nfs_pageio_setup_mirroring(struct nfs_pageio_descriptor *pgio,
+ 	pgio->pg_mirror_count = mirror_count;
+ }
+ 
+-/*
+- * nfs_pageio_stop_mirroring - stop using mirroring (set mirror count to 1)
+- */
+-void nfs_pageio_stop_mirroring(struct nfs_pageio_descriptor *pgio)
+-{
+-	pgio->pg_mirror_count = 1;
+-	pgio->pg_mirror_idx = 0;
+-}
+-
+ static void nfs_pageio_cleanup_mirroring(struct nfs_pageio_descriptor *pgio)
+ {
+ 	pgio->pg_mirror_count = 1;
+@@ -1320,6 +1311,14 @@ void nfs_pageio_cond_complete(struct nfs_pageio_descriptor *desc, pgoff_t index)
+ 	}
+ }
+ 
++/*
++ * nfs_pageio_stop_mirroring - stop using mirroring (set mirror count to 1)
++ */
++void nfs_pageio_stop_mirroring(struct nfs_pageio_descriptor *pgio)
++{
++	nfs_pageio_complete(pgio);
++}
++
+ int __init nfs_init_nfspagecache(void)
+ {
+ 	nfs_page_cachep = kmem_cache_create("nfs_page",
+diff --git a/fs/xfs/libxfs/xfs_alloc.c b/fs/xfs/libxfs/xfs_alloc.c
+index d8053bc96c4d..5a130409f173 100644
+--- a/fs/xfs/libxfs/xfs_alloc.c
++++ b/fs/xfs/libxfs/xfs_alloc.c
+@@ -1515,7 +1515,7 @@ xfs_alloc_ag_vextent_lastblock(
+ 	 * maxlen, go to the start of this block, and skip all those smaller
+ 	 * than minlen.
+ 	 */
+-	if (len || args->alignment > 1) {
++	if (*len || args->alignment > 1) {
+ 		acur->cnt->bc_ptrs[0] = 1;
+ 		do {
+ 			error = xfs_alloc_get_rec(acur->cnt, bno, len, &i);
+diff --git a/fs/xfs/xfs_attr_inactive.c b/fs/xfs/xfs_attr_inactive.c
+index bbfa6ba84dcd..fe8f60b59ec4 100644
+--- a/fs/xfs/xfs_attr_inactive.c
++++ b/fs/xfs/xfs_attr_inactive.c
+@@ -145,8 +145,8 @@ xfs_attr3_node_inactive(
+ 	 * Since this code is recursive (gasp!) we must protect ourselves.
+ 	 */
+ 	if (level > XFS_DA_NODE_MAXDEPTH) {
+-		xfs_trans_brelse(*trans, bp);	/* no locks for later trans */
+ 		xfs_buf_corruption_error(bp);
++		xfs_trans_brelse(*trans, bp);	/* no locks for later trans */
+ 		return -EFSCORRUPTED;
+ 	}
+ 
+diff --git a/fs/xfs/xfs_dir2_readdir.c b/fs/xfs/xfs_dir2_readdir.c
+index 0d3b640cf1cc..871ec22c9aee 100644
+--- a/fs/xfs/xfs_dir2_readdir.c
++++ b/fs/xfs/xfs_dir2_readdir.c
+@@ -147,7 +147,7 @@ xfs_dir2_block_getdents(
+ 	xfs_off_t		cook;
+ 	struct xfs_da_geometry	*geo = args->geo;
+ 	int			lock_mode;
+-	unsigned int		offset;
++	unsigned int		offset, next_offset;
+ 	unsigned int		end;
+ 
+ 	/*
+@@ -173,9 +173,10 @@ xfs_dir2_block_getdents(
+ 	 * Loop over the data portion of the block.
+ 	 * Each object is a real entry (dep) or an unused one (dup).
+ 	 */
+-	offset = geo->data_entry_offset;
+ 	end = xfs_dir3_data_end_offset(geo, bp->b_addr);
+-	while (offset < end) {
++	for (offset = geo->data_entry_offset;
++	     offset < end;
++	     offset = next_offset) {
+ 		struct xfs_dir2_data_unused	*dup = bp->b_addr + offset;
+ 		struct xfs_dir2_data_entry	*dep = bp->b_addr + offset;
+ 		uint8_t filetype;
+@@ -184,14 +185,15 @@ xfs_dir2_block_getdents(
+ 		 * Unused, skip it.
+ 		 */
+ 		if (be16_to_cpu(dup->freetag) == XFS_DIR2_DATA_FREE_TAG) {
+-			offset += be16_to_cpu(dup->length);
++			next_offset = offset + be16_to_cpu(dup->length);
+ 			continue;
+ 		}
+ 
+ 		/*
+ 		 * Bump pointer for the next iteration.
+ 		 */
+-		offset += xfs_dir2_data_entsize(dp->i_mount, dep->namelen);
++		next_offset = offset +
++			xfs_dir2_data_entsize(dp->i_mount, dep->namelen);
+ 
+ 		/*
+ 		 * The entry is before the desired starting point, skip it.
+diff --git a/fs/xfs/xfs_log.c b/fs/xfs/xfs_log.c
+index f6006d94a581..796ff37d5bb5 100644
+--- a/fs/xfs/xfs_log.c
++++ b/fs/xfs/xfs_log.c
+@@ -605,18 +605,23 @@ xfs_log_release_iclog(
+ 	struct xlog		*log = mp->m_log;
+ 	bool			sync;
+ 
+-	if (iclog->ic_state == XLOG_STATE_IOERROR) {
+-		xfs_force_shutdown(mp, SHUTDOWN_LOG_IO_ERROR);
+-		return -EIO;
+-	}
++	if (iclog->ic_state == XLOG_STATE_IOERROR)
++		goto error;
+ 
+ 	if (atomic_dec_and_lock(&iclog->ic_refcnt, &log->l_icloglock)) {
++		if (iclog->ic_state == XLOG_STATE_IOERROR) {
++			spin_unlock(&log->l_icloglock);
++			goto error;
++		}
+ 		sync = __xlog_state_release_iclog(log, iclog);
+ 		spin_unlock(&log->l_icloglock);
+ 		if (sync)
+ 			xlog_sync(log, iclog);
+ 	}
+ 	return 0;
++error:
++	xfs_force_shutdown(mp, SHUTDOWN_LOG_IO_ERROR);
++	return -EIO;
+ }
+ 
+ /*
+diff --git a/include/acpi/processor.h b/include/acpi/processor.h
+index 47805172e73d..683e124ad517 100644
+--- a/include/acpi/processor.h
++++ b/include/acpi/processor.h
+@@ -297,6 +297,14 @@ static inline void acpi_processor_ffh_cstate_enter(struct acpi_processor_cx
+ }
+ #endif
+ 
++static inline int call_on_cpu(int cpu, long (*fn)(void *), void *arg,
++			      bool direct)
++{
++	if (direct || (is_percpu_thread() && cpu == smp_processor_id()))
++		return fn(arg);
++	return work_on_cpu(cpu, fn, arg);
++}
++
+ /* in processor_perflib.c */
+ 
+ #ifdef CONFIG_CPU_FREQ
+diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
+index b3f1082cc435..1c4fd950f091 100644
+--- a/include/asm-generic/mshyperv.h
++++ b/include/asm-generic/mshyperv.h
+@@ -163,7 +163,7 @@ static inline int cpumask_to_vpset(struct hv_vpset *vpset,
+ 	return nr_bank;
+ }
+ 
+-void hyperv_report_panic(struct pt_regs *regs, long err);
++void hyperv_report_panic(struct pt_regs *regs, long err, bool in_die);
+ void hyperv_report_panic_msg(phys_addr_t pa, size_t size);
+ bool hv_is_hyperv_initialized(void);
+ bool hv_is_hibernation_supported(void);
+diff --git a/include/keys/big_key-type.h b/include/keys/big_key-type.h
+index f6a7ba4dccd4..3fee04f81439 100644
+--- a/include/keys/big_key-type.h
++++ b/include/keys/big_key-type.h
+@@ -17,6 +17,6 @@ extern void big_key_free_preparse(struct key_preparsed_payload *prep);
+ extern void big_key_revoke(struct key *key);
+ extern void big_key_destroy(struct key *key);
+ extern void big_key_describe(const struct key *big_key, struct seq_file *m);
+-extern long big_key_read(const struct key *key, char __user *buffer, size_t buflen);
++extern long big_key_read(const struct key *key, char *buffer, size_t buflen);
+ 
+ #endif /* _KEYS_BIG_KEY_TYPE_H */
+diff --git a/include/keys/user-type.h b/include/keys/user-type.h
+index d5e73266a81a..be61fcddc02a 100644
+--- a/include/keys/user-type.h
++++ b/include/keys/user-type.h
+@@ -41,8 +41,7 @@ extern int user_update(struct key *key, struct key_preparsed_payload *prep);
+ extern void user_revoke(struct key *key);
+ extern void user_destroy(struct key *key);
+ extern void user_describe(const struct key *user, struct seq_file *m);
+-extern long user_read(const struct key *key,
+-		      char __user *buffer, size_t buflen);
++extern long user_read(const struct key *key, char *buffer, size_t buflen);
+ 
+ static inline const struct user_key_payload *user_key_payload_rcu(const struct key *key)
+ {
+diff --git a/include/linux/buffer_head.h b/include/linux/buffer_head.h
+index 7b73ef7f902d..b56cc825f64d 100644
+--- a/include/linux/buffer_head.h
++++ b/include/linux/buffer_head.h
+@@ -189,6 +189,8 @@ struct buffer_head *__getblk_gfp(struct block_device *bdev, sector_t block,
+ void __brelse(struct buffer_head *);
+ void __bforget(struct buffer_head *);
+ void __breadahead(struct block_device *, sector_t block, unsigned int size);
++void __breadahead_gfp(struct block_device *, sector_t block, unsigned int size,
++		  gfp_t gfp);
+ struct buffer_head *__bread_gfp(struct block_device *,
+ 				sector_t block, unsigned size, gfp_t gfp);
+ void invalidate_bh_lrus(void);
+@@ -319,6 +321,12 @@ sb_breadahead(struct super_block *sb, sector_t block)
+ 	__breadahead(sb->s_bdev, block, sb->s_blocksize);
+ }
+ 
++static inline void
++sb_breadahead_unmovable(struct super_block *sb, sector_t block)
++{
++	__breadahead_gfp(sb->s_bdev, block, sb->s_blocksize, 0);
++}
++
+ static inline struct buffer_head *
+ sb_getblk(struct super_block *sb, sector_t block)
+ {
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 5e88e7e33abe..034b0a644efc 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -347,7 +347,7 @@ static inline void *offset_to_ptr(const int *off)
+  * compiler has support to do so.
+  */
+ #define compiletime_assert(condition, msg) \
+-	_compiletime_assert(condition, msg, __compiletime_assert_, __LINE__)
++	_compiletime_assert(condition, msg, __compiletime_assert_, __COUNTER__)
+ 
+ #define compiletime_assert_atomic_type(t)				\
+ 	compiletime_assert(__native_word(t),				\
+diff --git a/include/linux/f2fs_fs.h b/include/linux/f2fs_fs.h
+index ac3f4888b3df..3c383ddd92dd 100644
+--- a/include/linux/f2fs_fs.h
++++ b/include/linux/f2fs_fs.h
+@@ -125,6 +125,7 @@ struct f2fs_super_block {
+ /*
+  * For checkpoint
+  */
++#define CP_RESIZEFS_FLAG		0x00004000
+ #define CP_DISABLED_QUICK_FLAG		0x00002000
+ #define CP_DISABLED_FLAG		0x00001000
+ #define CP_QUOTA_NEED_FSCK_FLAG		0x00000800
+diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
+index 1e897e4168ac..dafb3d70ff81 100644
+--- a/include/linux/hugetlb.h
++++ b/include/linux/hugetlb.h
+@@ -390,7 +390,10 @@ static inline bool is_file_hugepages(struct file *file)
+ 	return is_file_shm_hugepages(file);
+ }
+ 
+-
++static inline struct hstate *hstate_inode(struct inode *i)
++{
++	return HUGETLBFS_SB(i->i_sb)->hstate;
++}
+ #else /* !CONFIG_HUGETLBFS */
+ 
+ #define is_file_hugepages(file)			false
+@@ -402,6 +405,10 @@ hugetlb_file_setup(const char *name, size_t size, vm_flags_t acctflag,
+ 	return ERR_PTR(-ENOSYS);
+ }
+ 
++static inline struct hstate *hstate_inode(struct inode *i)
++{
++	return NULL;
++}
+ #endif /* !CONFIG_HUGETLBFS */
+ 
+ #ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+@@ -472,11 +479,6 @@ extern unsigned int default_hstate_idx;
+ 
+ #define default_hstate (hstates[default_hstate_idx])
+ 
+-static inline struct hstate *hstate_inode(struct inode *i)
+-{
+-	return HUGETLBFS_SB(i->i_sb)->hstate;
+-}
+-
+ static inline struct hstate *hstate_file(struct file *f)
+ {
+ 	return hstate_inode(file_inode(f));
+@@ -729,11 +731,6 @@ static inline struct hstate *hstate_vma(struct vm_area_struct *vma)
+ 	return NULL;
+ }
+ 
+-static inline struct hstate *hstate_inode(struct inode *i)
+-{
+-	return NULL;
+-}
+-
+ static inline struct hstate *page_hstate(struct page *page)
+ {
+ 	return NULL;
+diff --git a/include/linux/key-type.h b/include/linux/key-type.h
+index 4ded94bcf274..2ab2d6d6aeab 100644
+--- a/include/linux/key-type.h
++++ b/include/linux/key-type.h
+@@ -127,7 +127,7 @@ struct key_type {
+ 	 *   much is copied into the buffer
+ 	 * - shouldn't do the copy if the buffer is NULL
+ 	 */
+-	long (*read)(const struct key *key, char __user *buffer, size_t buflen);
++	long (*read)(const struct key *key, char *buffer, size_t buflen);
+ 
+ 	/* handle request_key() for this type instead of invoking
+ 	 * /sbin/request-key (optional)
+diff --git a/include/linux/percpu_counter.h b/include/linux/percpu_counter.h
+index 4f052496cdfd..0a4f54dd4737 100644
+--- a/include/linux/percpu_counter.h
++++ b/include/linux/percpu_counter.h
+@@ -78,9 +78,9 @@ static inline s64 percpu_counter_read(struct percpu_counter *fbc)
+  */
+ static inline s64 percpu_counter_read_positive(struct percpu_counter *fbc)
+ {
+-	s64 ret = fbc->count;
++	/* Prevent reloads of fbc->count */
++	s64 ret = READ_ONCE(fbc->count);
+ 
+-	barrier();		/* Prevent reloads of fbc->count */
+ 	if (ret >= 0)
+ 		return ret;
+ 	return 0;
+diff --git a/include/linux/platform_data/cros_ec_proto.h b/include/linux/platform_data/cros_ec_proto.h
+index ba5914770191..383243326676 100644
+--- a/include/linux/platform_data/cros_ec_proto.h
++++ b/include/linux/platform_data/cros_ec_proto.h
+@@ -125,6 +125,9 @@ struct cros_ec_command {
+  * @host_event_wake_mask: Mask of host events that cause wake from suspend.
+  * @last_event_time: exact time from the hard irq when we got notified of
+  *     a new event.
++ * @notifier_ready: The notifier_block to let the kernel re-query EC
++ *		    communication protocol when the EC sends
++ *		    EC_HOST_EVENT_INTERFACE_READY.
+  * @ec: The platform_device used by the mfd driver to interface with the
+  *      main EC.
+  * @pd: The platform_device used by the mfd driver to interface with the
+@@ -166,6 +169,7 @@ struct cros_ec_device {
+ 	u32 host_event_wake_mask;
+ 	u32 last_resume_result;
+ 	ktime_t last_event_time;
++	struct notifier_block notifier_ready;
+ 
+ 	/* The platform devices used by the mfd driver */
+ 	struct platform_device *ec;
+diff --git a/include/linux/swapops.h b/include/linux/swapops.h
+index 877fd239b6ff..3208a520d0be 100644
+--- a/include/linux/swapops.h
++++ b/include/linux/swapops.h
+@@ -348,7 +348,8 @@ static inline void num_poisoned_pages_inc(void)
+ }
+ #endif
+ 
+-#if defined(CONFIG_MEMORY_FAILURE) || defined(CONFIG_MIGRATION)
++#if defined(CONFIG_MEMORY_FAILURE) || defined(CONFIG_MIGRATION) || \
++    defined(CONFIG_DEVICE_PRIVATE)
+ static inline int non_swap_entry(swp_entry_t entry)
+ {
+ 	return swp_type(entry) >= MAX_SWAPFILES;
+diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
+index b04c29270973..1ce3be63add1 100644
+--- a/include/trace/bpf_probe.h
++++ b/include/trace/bpf_probe.h
+@@ -75,13 +75,17 @@ static inline void bpf_test_probe_##call(void)				\
+ 	check_trace_callback_type_##call(__bpf_trace_##template);	\
+ }									\
+ typedef void (*btf_trace_##call)(void *__data, proto);			\
+-static struct bpf_raw_event_map	__used					\
+-	__attribute__((section("__bpf_raw_tp_map")))			\
+-__bpf_trace_tp_map_##call = {						\
+-	.tp		= &__tracepoint_##call,				\
+-	.bpf_func	= (void *)(btf_trace_##call)__bpf_trace_##template,	\
+-	.num_args	= COUNT_ARGS(args),				\
+-	.writable_size	= size,						\
++static union {								\
++	struct bpf_raw_event_map event;					\
++	btf_trace_##call handler;					\
++} __bpf_trace_tp_map_##call __used					\
++__attribute__((section("__bpf_raw_tp_map"))) = {			\
++	.event = {							\
++		.tp		= &__tracepoint_##call,			\
++		.bpf_func	= __bpf_trace_##template,		\
++		.num_args	= COUNT_ARGS(args),			\
++		.writable_size	= size,					\
++	},								\
+ };
+ 
+ #define FIRST(x, ...) x
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 966b7b34cde0..3b92aea18ae7 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -592,9 +592,7 @@ static void bpf_map_mmap_open(struct vm_area_struct *vma)
+ {
+ 	struct bpf_map *map = vma->vm_file->private_data;
+ 
+-	bpf_map_inc_with_uref(map);
+-
+-	if (vma->vm_flags & VM_WRITE) {
++	if (vma->vm_flags & VM_MAYWRITE) {
+ 		mutex_lock(&map->freeze_mutex);
+ 		map->writecnt++;
+ 		mutex_unlock(&map->freeze_mutex);
+@@ -606,13 +604,11 @@ static void bpf_map_mmap_close(struct vm_area_struct *vma)
+ {
+ 	struct bpf_map *map = vma->vm_file->private_data;
+ 
+-	if (vma->vm_flags & VM_WRITE) {
++	if (vma->vm_flags & VM_MAYWRITE) {
+ 		mutex_lock(&map->freeze_mutex);
+ 		map->writecnt--;
+ 		mutex_unlock(&map->freeze_mutex);
+ 	}
+-
+-	bpf_map_put_with_uref(map);
+ }
+ 
+ static const struct vm_operations_struct bpf_map_default_vmops = {
+@@ -641,14 +637,16 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ 	/* set default open/close callbacks */
+ 	vma->vm_ops = &bpf_map_default_vmops;
+ 	vma->vm_private_data = map;
++	vma->vm_flags &= ~VM_MAYEXEC;
++	if (!(vma->vm_flags & VM_WRITE))
++		/* disallow re-mapping with PROT_WRITE */
++		vma->vm_flags &= ~VM_MAYWRITE;
+ 
+ 	err = map->ops->map_mmap(map, vma);
+ 	if (err)
+ 		goto out;
+ 
+-	bpf_map_inc_with_uref(map);
+-
+-	if (vma->vm_flags & VM_WRITE)
++	if (vma->vm_flags & VM_MAYWRITE)
+ 		map->writecnt++;
+ out:
+ 	mutex_unlock(&map->freeze_mutex);
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 595b39eee642..e5d12c54b552 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -227,8 +227,7 @@ struct bpf_call_arg_meta {
+ 	bool pkt_access;
+ 	int regno;
+ 	int access_size;
+-	s64 msize_smax_value;
+-	u64 msize_umax_value;
++	u64 msize_max_value;
+ 	int ref_obj_id;
+ 	int func_id;
+ 	u32 btf_id;
+@@ -3568,8 +3567,7 @@ static int check_func_arg(struct bpf_verifier_env *env, u32 regno,
+ 		/* remember the mem_size which may be used later
+ 		 * to refine return values.
+ 		 */
+-		meta->msize_smax_value = reg->smax_value;
+-		meta->msize_umax_value = reg->umax_value;
++		meta->msize_max_value = reg->umax_value;
+ 
+ 		/* The register is SCALAR_VALUE; the access check
+ 		 * happens using its boundaries.
+@@ -4095,21 +4093,44 @@ static int prepare_func_exit(struct bpf_verifier_env *env, int *insn_idx)
+ 	return 0;
+ }
+ 
+-static void do_refine_retval_range(struct bpf_reg_state *regs, int ret_type,
+-				   int func_id,
+-				   struct bpf_call_arg_meta *meta)
++static int do_refine_retval_range(struct bpf_verifier_env *env,
++				  struct bpf_reg_state *regs, int ret_type,
++				  int func_id, struct bpf_call_arg_meta *meta)
+ {
+ 	struct bpf_reg_state *ret_reg = &regs[BPF_REG_0];
++	struct bpf_reg_state tmp_reg = *ret_reg;
++	bool ret;
+ 
+ 	if (ret_type != RET_INTEGER ||
+ 	    (func_id != BPF_FUNC_get_stack &&
+ 	     func_id != BPF_FUNC_probe_read_str))
+-		return;
++		return 0;
++
++	/* Error case where ret is in interval [S32MIN, -1]. */
++	ret_reg->smin_value = S32_MIN;
++	ret_reg->smax_value = -1;
+ 
+-	ret_reg->smax_value = meta->msize_smax_value;
+-	ret_reg->umax_value = meta->msize_umax_value;
+ 	__reg_deduce_bounds(ret_reg);
+ 	__reg_bound_offset(ret_reg);
++	__update_reg_bounds(ret_reg);
++
++	ret = push_stack(env, env->insn_idx + 1, env->insn_idx, false);
++	if (!ret)
++		return -EFAULT;
++
++	*ret_reg = tmp_reg;
++
++	/* Success case where ret is in range [0, msize_max_value]. */
++	ret_reg->smin_value = 0;
++	ret_reg->smax_value = meta->msize_max_value;
++	ret_reg->umin_value = ret_reg->smin_value;
++	ret_reg->umax_value = ret_reg->smax_value;
++
++	__reg_deduce_bounds(ret_reg);
++	__reg_bound_offset(ret_reg);
++	__update_reg_bounds(ret_reg);
++
++	return 0;
+ }
+ 
+ static int
+@@ -4377,7 +4398,9 @@ static int check_helper_call(struct bpf_verifier_env *env, int func_id, int insn
+ 		regs[BPF_REG_0].ref_obj_id = id;
+ 	}
+ 
+-	do_refine_retval_range(regs, fn->ret_type, func_id, &meta);
++	err = do_refine_retval_range(env, regs, fn->ret_type, func_id, &meta);
++	if (err)
++		return err;
+ 
+ 	err = check_map_func_compatibility(env, meta.map_ptr, func_id);
+ 	if (err)
+diff --git a/kernel/dma/coherent.c b/kernel/dma/coherent.c
+index 551b0eb7028a..2a0c4985f38e 100644
+--- a/kernel/dma/coherent.c
++++ b/kernel/dma/coherent.c
+@@ -134,7 +134,7 @@ static void *__dma_alloc_from_coherent(struct device *dev,
+ 
+ 	spin_lock_irqsave(&mem->spinlock, flags);
+ 
+-	if (unlikely(size > (mem->size << PAGE_SHIFT)))
++	if (unlikely(size > ((dma_addr_t)mem->size << PAGE_SHIFT)))
+ 		goto err;
+ 
+ 	pageno = bitmap_find_free_region(mem->bitmap, mem->size, order);
+@@ -144,8 +144,9 @@ static void *__dma_alloc_from_coherent(struct device *dev,
+ 	/*
+ 	 * Memory was found in the coherent area.
+ 	 */
+-	*dma_handle = dma_get_device_base(dev, mem) + (pageno << PAGE_SHIFT);
+-	ret = mem->virt_base + (pageno << PAGE_SHIFT);
++	*dma_handle = dma_get_device_base(dev, mem) +
++			((dma_addr_t)pageno << PAGE_SHIFT);
++	ret = mem->virt_base + ((dma_addr_t)pageno << PAGE_SHIFT);
+ 	spin_unlock_irqrestore(&mem->spinlock, flags);
+ 	memset(ret, 0, size);
+ 	return ret;
+@@ -194,7 +195,7 @@ static int __dma_release_from_coherent(struct dma_coherent_mem *mem,
+ 				       int order, void *vaddr)
+ {
+ 	if (mem && vaddr >= mem->virt_base && vaddr <
+-		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
++		   (mem->virt_base + ((dma_addr_t)mem->size << PAGE_SHIFT))) {
+ 		int page = (vaddr - mem->virt_base) >> PAGE_SHIFT;
+ 		unsigned long flags;
+ 
+@@ -238,10 +239,10 @@ static int __dma_mmap_from_coherent(struct dma_coherent_mem *mem,
+ 		struct vm_area_struct *vma, void *vaddr, size_t size, int *ret)
+ {
+ 	if (mem && vaddr >= mem->virt_base && vaddr + size <=
+-		   (mem->virt_base + (mem->size << PAGE_SHIFT))) {
++		   (mem->virt_base + ((dma_addr_t)mem->size << PAGE_SHIFT))) {
+ 		unsigned long off = vma->vm_pgoff;
+ 		int start = (vaddr - mem->virt_base) >> PAGE_SHIFT;
+-		int user_count = vma_pages(vma);
++		unsigned long user_count = vma_pages(vma);
+ 		int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ 
+ 		*ret = -ENXIO;
+diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
+index 2031ed1ad7fa..9e1777c81f55 100644
+--- a/kernel/dma/debug.c
++++ b/kernel/dma/debug.c
+@@ -137,9 +137,12 @@ static const char *const maperr2str[] = {
+ 	[MAP_ERR_CHECKED] = "dma map error checked",
+ };
+ 
+-static const char *type2name[5] = { "single", "page",
+-				    "scather-gather", "coherent",
+-				    "resource" };
++static const char *type2name[] = {
++	[dma_debug_single] = "single",
++	[dma_debug_sg] = "scather-gather",
++	[dma_debug_coherent] = "coherent",
++	[dma_debug_resource] = "resource",
++};
+ 
+ static const char *dir2name[4] = { "DMA_BIDIRECTIONAL", "DMA_TO_DEVICE",
+ 				   "DMA_FROM_DEVICE", "DMA_NONE" };
+diff --git a/kernel/locking/locktorture.c b/kernel/locking/locktorture.c
+index 99475a66c94f..687c1d83dc20 100644
+--- a/kernel/locking/locktorture.c
++++ b/kernel/locking/locktorture.c
+@@ -696,10 +696,10 @@ static void __torture_print_stats(char *page,
+ 		if (statp[i].n_lock_fail)
+ 			fail = true;
+ 		sum += statp[i].n_lock_acquired;
+-		if (max < statp[i].n_lock_fail)
+-			max = statp[i].n_lock_fail;
+-		if (min > statp[i].n_lock_fail)
+-			min = statp[i].n_lock_fail;
++		if (max < statp[i].n_lock_acquired)
++			max = statp[i].n_lock_acquired;
++		if (min > statp[i].n_lock_acquired)
++			min = statp[i].n_lock_acquired;
+ 	}
+ 	page += sprintf(page,
+ 			"%s:  Total: %lld  Max/Min: %ld/%ld %s  Fail: %d %s\n",
+diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
+index 69def4a9df00..ab9af2e052ca 100644
+--- a/lib/Kconfig.debug
++++ b/lib/Kconfig.debug
+@@ -241,6 +241,8 @@ config DEBUG_INFO_DWARF4
+ config DEBUG_INFO_BTF
+ 	bool "Generate BTF typeinfo"
+ 	depends on DEBUG_INFO
++	depends on !DEBUG_INFO_SPLIT && !DEBUG_INFO_REDUCED
++	depends on !GCC_PLUGIN_RANDSTRUCT || COMPILE_TEST
+ 	help
+ 	  Generate deduplicated BTF type information from DWARF debug info.
+ 	  Turning this on expects presence of pahole tool, which will convert
+diff --git a/net/dns_resolver/dns_key.c b/net/dns_resolver/dns_key.c
+index 3e1a90669006..ad53eb31d40f 100644
+--- a/net/dns_resolver/dns_key.c
++++ b/net/dns_resolver/dns_key.c
+@@ -302,7 +302,7 @@ static void dns_resolver_describe(const struct key *key, struct seq_file *m)
+  * - the key's semaphore is read-locked
+  */
+ static long dns_resolver_read(const struct key *key,
+-			      char __user *buffer, size_t buflen)
++			      char *buffer, size_t buflen)
+ {
+ 	int err = PTR_ERR(key->payload.data[dns_key_error]);
+ 
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index d11f1a74d43c..68ec31c4ae65 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3950,7 +3950,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 			      NFT_SET_INTERVAL | NFT_SET_TIMEOUT |
+ 			      NFT_SET_MAP | NFT_SET_EVAL |
+ 			      NFT_SET_OBJECT))
+-			return -EINVAL;
++			return -EOPNOTSUPP;
+ 		/* Only one of these operations is supported */
+ 		if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) ==
+ 			     (NFT_SET_MAP | NFT_SET_OBJECT))
+@@ -3988,7 +3988,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		objtype = ntohl(nla_get_be32(nla[NFTA_SET_OBJ_TYPE]));
+ 		if (objtype == NFT_OBJECT_UNSPEC ||
+ 		    objtype > NFT_OBJECT_MAX)
+-			return -EINVAL;
++			return -EOPNOTSUPP;
+ 	} else if (flags & NFT_SET_OBJECT)
+ 		return -EINVAL;
+ 	else
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 8617fc16a1ed..46d976969ca3 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -218,27 +218,26 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 
+ 	/* Detect overlaps as we descend the tree. Set the flag in these cases:
+ 	 *
+-	 * a1. |__ _ _?  >|__ _ _  (insert start after existing start)
+-	 * a2. _ _ __>|  ?_ _ __|  (insert end before existing end)
+-	 * a3. _ _ ___|  ?_ _ _>|  (insert end after existing end)
+-	 * a4. >|__ _ _   _ _ __|  (insert start before existing end)
++	 * a1. _ _ __>|  ?_ _ __|  (insert end before existing end)
++	 * a2. _ _ ___|  ?_ _ _>|  (insert end after existing end)
++	 * a3. _ _ ___? >|_ _ __|  (insert start before existing end)
+ 	 *
+ 	 * and clear it later on, as we eventually reach the points indicated by
+ 	 * '?' above, in the cases described below. We'll always meet these
+ 	 * later, locally, due to tree ordering, and overlaps for the intervals
+ 	 * that are the closest together are always evaluated last.
+ 	 *
+-	 * b1. |__ _ _!  >|__ _ _  (insert start after existing end)
+-	 * b2. _ _ __>|  !_ _ __|  (insert end before existing start)
+-	 * b3. !_____>|            (insert end after existing start)
++	 * b1. _ _ __>|  !_ _ __|  (insert end before existing start)
++	 * b2. _ _ ___|  !_ _ _>|  (insert end after existing start)
++	 * b3. _ _ ___! >|_ _ __|  (insert start after existing end)
+ 	 *
+-	 * Case a4. resolves to b1.:
++	 * Case a3. resolves to b3.:
+ 	 * - if the inserted start element is the leftmost, because the '0'
+ 	 *   element in the tree serves as end element
+ 	 * - otherwise, if an existing end is found. Note that end elements are
+ 	 *   always inserted after corresponding start elements.
+ 	 *
+-	 * For a new, rightmost pair of elements, we'll hit cases b1. and b3.,
++	 * For a new, rightmost pair of elements, we'll hit cases b3. and b2.,
+ 	 * in that order.
+ 	 *
+ 	 * The flag is also cleared in two special cases:
+@@ -262,9 +261,9 @@ static int __nft_rbtree_insert(const struct net *net, const struct nft_set *set,
+ 			p = &parent->rb_left;
+ 
+ 			if (nft_rbtree_interval_start(new)) {
+-				overlap = nft_rbtree_interval_start(rbe) &&
+-					  nft_set_elem_active(&rbe->ext,
+-							      genmask);
++				if (nft_rbtree_interval_end(rbe) &&
++				    nft_set_elem_active(&rbe->ext, genmask))
++					overlap = false;
+ 			} else {
+ 				overlap = nft_rbtree_interval_end(rbe) &&
+ 					  nft_set_elem_active(&rbe->ext,
+diff --git a/net/rxrpc/key.c b/net/rxrpc/key.c
+index 6c3f35fac42d..0c98313dd7a8 100644
+--- a/net/rxrpc/key.c
++++ b/net/rxrpc/key.c
+@@ -31,7 +31,7 @@ static void rxrpc_free_preparse_s(struct key_preparsed_payload *);
+ static void rxrpc_destroy(struct key *);
+ static void rxrpc_destroy_s(struct key *);
+ static void rxrpc_describe(const struct key *, struct seq_file *);
+-static long rxrpc_read(const struct key *, char __user *, size_t);
++static long rxrpc_read(const struct key *, char *, size_t);
+ 
+ /*
+  * rxrpc defined keys take an arbitrary string as the description and an
+@@ -1042,12 +1042,12 @@ EXPORT_SYMBOL(rxrpc_get_null_key);
+  * - this returns the result in XDR form
+  */
+ static long rxrpc_read(const struct key *key,
+-		       char __user *buffer, size_t buflen)
++		       char *buffer, size_t buflen)
+ {
+ 	const struct rxrpc_key_token *token;
+ 	const struct krb5_principal *princ;
+ 	size_t size;
+-	__be32 __user *xdr, *oldxdr;
++	__be32 *xdr, *oldxdr;
+ 	u32 cnlen, toksize, ntoks, tok, zero;
+ 	u16 toksizes[AFSTOKEN_MAX];
+ 	int loop;
+@@ -1124,30 +1124,25 @@ static long rxrpc_read(const struct key *key,
+ 	if (!buffer || buflen < size)
+ 		return size;
+ 
+-	xdr = (__be32 __user *) buffer;
++	xdr = (__be32 *)buffer;
+ 	zero = 0;
+ #define ENCODE(x)				\
+ 	do {					\
+-		__be32 y = htonl(x);		\
+-		if (put_user(y, xdr++) < 0)	\
+-			goto fault;		\
++		*xdr++ = htonl(x);		\
+ 	} while(0)
+ #define ENCODE_DATA(l, s)						\
+ 	do {								\
+ 		u32 _l = (l);						\
+ 		ENCODE(l);						\
+-		if (copy_to_user(xdr, (s), _l) != 0)			\
+-			goto fault;					\
+-		if (_l & 3 &&						\
+-		    copy_to_user((u8 __user *)xdr + _l, &zero, 4 - (_l & 3)) != 0) \
+-			goto fault;					\
++		memcpy(xdr, (s), _l);					\
++		if (_l & 3)						\
++			memcpy((u8 *)xdr + _l, &zero, 4 - (_l & 3));	\
+ 		xdr += (_l + 3) >> 2;					\
+ 	} while(0)
+ #define ENCODE64(x)					\
+ 	do {						\
+ 		__be64 y = cpu_to_be64(x);		\
+-		if (copy_to_user(xdr, &y, 8) != 0)	\
+-			goto fault;			\
++		memcpy(xdr, &y, 8);			\
+ 		xdr += 8 >> 2;				\
+ 	} while(0)
+ #define ENCODE_STR(s)				\
+@@ -1238,8 +1233,4 @@ static long rxrpc_read(const struct key *key,
+ 	ASSERTCMP((char __user *) xdr - buffer, ==, size);
+ 	_leave(" = %zu", size);
+ 	return size;
+-
+-fault:
+-	_leave(" = -EFAULT");
+-	return -EFAULT;
+ }
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 24ca861815b1..2dc740acb3bf 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -20,6 +20,7 @@
+ #include <linux/sunrpc/clnt.h>
+ #include <linux/sunrpc/auth.h>
+ #include <linux/sunrpc/auth_gss.h>
++#include <linux/sunrpc/gss_krb5.h>
+ #include <linux/sunrpc/svcauth_gss.h>
+ #include <linux/sunrpc/gss_err.h>
+ #include <linux/workqueue.h>
+@@ -1050,7 +1051,7 @@ gss_create_new(const struct rpc_auth_create_args *args, struct rpc_clnt *clnt)
+ 		goto err_put_mech;
+ 	auth = &gss_auth->rpc_auth;
+ 	auth->au_cslack = GSS_CRED_SLACK >> 2;
+-	auth->au_rslack = GSS_VERF_SLACK >> 2;
++	auth->au_rslack = GSS_KRB5_MAX_SLACK_NEEDED >> 2;
+ 	auth->au_verfsize = GSS_VERF_SLACK >> 2;
+ 	auth->au_ralign = GSS_VERF_SLACK >> 2;
+ 	auth->au_flags = 0;
+@@ -1934,35 +1935,69 @@ gss_unwrap_resp_auth(struct rpc_cred *cred)
+ 	return 0;
+ }
+ 
++/*
++ * RFC 2203, Section 5.3.2.2
++ *
++ *	struct rpc_gss_integ_data {
++ *		opaque databody_integ<>;
++ *		opaque checksum<>;
++ *	};
++ *
++ *	struct rpc_gss_data_t {
++ *		unsigned int seq_num;
++ *		proc_req_arg_t arg;
++ *	};
++ */
+ static int
+ gss_unwrap_resp_integ(struct rpc_task *task, struct rpc_cred *cred,
+ 		      struct gss_cl_ctx *ctx, struct rpc_rqst *rqstp,
+ 		      struct xdr_stream *xdr)
+ {
+-	struct xdr_buf integ_buf, *rcv_buf = &rqstp->rq_rcv_buf;
+-	u32 data_offset, mic_offset, integ_len, maj_stat;
++	struct xdr_buf gss_data, *rcv_buf = &rqstp->rq_rcv_buf;
+ 	struct rpc_auth *auth = cred->cr_auth;
++	u32 len, offset, seqno, maj_stat;
+ 	struct xdr_netobj mic;
+-	__be32 *p;
++	int ret;
+ 
+-	p = xdr_inline_decode(xdr, 2 * sizeof(*p));
+-	if (unlikely(!p))
++	ret = -EIO;
++	mic.data = NULL;
++
++	/* opaque databody_integ<>; */
++	if (xdr_stream_decode_u32(xdr, &len))
+ 		goto unwrap_failed;
+-	integ_len = be32_to_cpup(p++);
+-	if (integ_len & 3)
++	if (len & 3)
+ 		goto unwrap_failed;
+-	data_offset = (u8 *)(p) - (u8 *)rcv_buf->head[0].iov_base;
+-	mic_offset = integ_len + data_offset;
+-	if (mic_offset > rcv_buf->len)
++	offset = rcv_buf->len - xdr_stream_remaining(xdr);
++	if (xdr_stream_decode_u32(xdr, &seqno))
+ 		goto unwrap_failed;
+-	if (be32_to_cpup(p) != rqstp->rq_seqno)
++	if (seqno != rqstp->rq_seqno)
+ 		goto bad_seqno;
++	if (xdr_buf_subsegment(rcv_buf, &gss_data, offset, len))
++		goto unwrap_failed;
+ 
+-	if (xdr_buf_subsegment(rcv_buf, &integ_buf, data_offset, integ_len))
++	/*
++	 * The xdr_stream now points to the beginning of the
++	 * upper layer payload, to be passed below to
++	 * rpcauth_unwrap_resp_decode(). The checksum, which
++	 * follows the upper layer payload in @rcv_buf, is
++	 * located and parsed without updating the xdr_stream.
++	 */
++
++	/* opaque checksum<>; */
++	offset += len;
++	if (xdr_decode_word(rcv_buf, offset, &len))
++		goto unwrap_failed;
++	offset += sizeof(__be32);
++	if (offset + len > rcv_buf->len)
+ 		goto unwrap_failed;
+-	if (xdr_buf_read_mic(rcv_buf, &mic, mic_offset))
++	mic.len = len;
++	mic.data = kmalloc(len, GFP_NOFS);
++	if (!mic.data)
++		goto unwrap_failed;
++	if (read_bytes_from_xdr_buf(rcv_buf, offset, mic.data, mic.len))
+ 		goto unwrap_failed;
+-	maj_stat = gss_verify_mic(ctx->gc_gss_ctx, &integ_buf, &mic);
++
++	maj_stat = gss_verify_mic(ctx->gc_gss_ctx, &gss_data, &mic);
+ 	if (maj_stat == GSS_S_CONTEXT_EXPIRED)
+ 		clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags);
+ 	if (maj_stat != GSS_S_COMPLETE)
+@@ -1970,16 +2005,21 @@ gss_unwrap_resp_integ(struct rpc_task *task, struct rpc_cred *cred,
+ 
+ 	auth->au_rslack = auth->au_verfsize + 2 + 1 + XDR_QUADLEN(mic.len);
+ 	auth->au_ralign = auth->au_verfsize + 2;
+-	return 0;
++	ret = 0;
++
++out:
++	kfree(mic.data);
++	return ret;
++
+ unwrap_failed:
+ 	trace_rpcgss_unwrap_failed(task);
+-	return -EIO;
++	goto out;
+ bad_seqno:
+-	trace_rpcgss_bad_seqno(task, rqstp->rq_seqno, be32_to_cpup(p));
+-	return -EIO;
++	trace_rpcgss_bad_seqno(task, rqstp->rq_seqno, seqno);
++	goto out;
+ bad_mic:
+ 	trace_rpcgss_verify_mic(task, maj_stat);
+-	return -EIO;
++	goto out;
+ }
+ 
+ static int
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index fa7bb5e060d0..ed7a6060f73c 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -343,7 +343,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
+ 	unsigned int chunks, chunks_per_page;
+ 	u64 addr = mr->addr, size = mr->len;
+-	int size_chk, err;
++	int err;
+ 
+ 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
+ 		/* Strictly speaking we could support this, if:
+@@ -382,8 +382,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 			return -EINVAL;
+ 	}
+ 
+-	size_chk = chunk_size - headroom - XDP_PACKET_HEADROOM;
+-	if (size_chk < 0)
++	if (headroom >= chunk_size - XDP_PACKET_HEADROOM)
+ 		return -EINVAL;
+ 
+ 	umem->address = (unsigned long)addr;
+diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
+index 356f90e4522b..c350108aa38d 100644
+--- a/net/xdp/xsk.c
++++ b/net/xdp/xsk.c
+@@ -131,8 +131,9 @@ static void __xsk_rcv_memcpy(struct xdp_umem *umem, u64 addr, void *from_buf,
+ 		u64 page_start = addr & ~(PAGE_SIZE - 1);
+ 		u64 first_len = PAGE_SIZE - (addr - page_start);
+ 
+-		memcpy(to_buf, from_buf, first_len + metalen);
+-		memcpy(next_pg_addr, from_buf + first_len, len - first_len);
++		memcpy(to_buf, from_buf, first_len);
++		memcpy(next_pg_addr, from_buf + first_len,
++		       len + metalen - first_len);
+ 
+ 		return;
+ 	}
+diff --git a/security/keys/big_key.c b/security/keys/big_key.c
+index 001abe530a0d..82008f900930 100644
+--- a/security/keys/big_key.c
++++ b/security/keys/big_key.c
+@@ -352,7 +352,7 @@ void big_key_describe(const struct key *key, struct seq_file *m)
+  * read the key data
+  * - the key's semaphore is read-locked
+  */
+-long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
++long big_key_read(const struct key *key, char *buffer, size_t buflen)
+ {
+ 	size_t datalen = (size_t)key->payload.data[big_key_len];
+ 	long ret;
+@@ -391,9 +391,8 @@ long big_key_read(const struct key *key, char __user *buffer, size_t buflen)
+ 
+ 		ret = datalen;
+ 
+-		/* copy decrypted data to user */
+-		if (copy_to_user(buffer, buf->virt, datalen) != 0)
+-			ret = -EFAULT;
++		/* copy out decrypted data */
++		memcpy(buffer, buf->virt, datalen);
+ 
+ err_fput:
+ 		fput(file);
+@@ -401,9 +400,7 @@ error:
+ 		big_key_free_buffer(buf);
+ 	} else {
+ 		ret = datalen;
+-		if (copy_to_user(buffer, key->payload.data[big_key_data],
+-				 datalen) != 0)
+-			ret = -EFAULT;
++		memcpy(buffer, key->payload.data[big_key_data], datalen);
+ 	}
+ 
+ 	return ret;
+diff --git a/security/keys/encrypted-keys/encrypted.c b/security/keys/encrypted-keys/encrypted.c
+index 60720f58cbe0..f6797ba44bf7 100644
+--- a/security/keys/encrypted-keys/encrypted.c
++++ b/security/keys/encrypted-keys/encrypted.c
+@@ -902,14 +902,14 @@ out:
+ }
+ 
+ /*
+- * encrypted_read - format and copy the encrypted data to userspace
++ * encrypted_read - format and copy out the encrypted data
+  *
+  * The resulting datablob format is:
+  * <master-key name> <decrypted data length> <encrypted iv> <encrypted data>
+  *
+  * On success, return to userspace the encrypted key datablob size.
+  */
+-static long encrypted_read(const struct key *key, char __user *buffer,
++static long encrypted_read(const struct key *key, char *buffer,
+ 			   size_t buflen)
+ {
+ 	struct encrypted_key_payload *epayload;
+@@ -957,8 +957,7 @@ static long encrypted_read(const struct key *key, char __user *buffer,
+ 	key_put(mkey);
+ 	memzero_explicit(derived_key, sizeof(derived_key));
+ 
+-	if (copy_to_user(buffer, ascii_buf, asciiblob_len) != 0)
+-		ret = -EFAULT;
++	memcpy(buffer, ascii_buf, asciiblob_len);
+ 	kzfree(ascii_buf);
+ 
+ 	return asciiblob_len;
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index d1a3dea58dee..106e16f9006b 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -797,6 +797,21 @@ error:
+ 	return ret;
+ }
+ 
++/*
++ * Call the read method
++ */
++static long __keyctl_read_key(struct key *key, char *buffer, size_t buflen)
++{
++	long ret;
++
++	down_read(&key->sem);
++	ret = key_validate(key);
++	if (ret == 0)
++		ret = key->type->read(key, buffer, buflen);
++	up_read(&key->sem);
++	return ret;
++}
++
+ /*
+  * Read a key's payload.
+  *
+@@ -812,26 +827,27 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
+ 	struct key *key;
+ 	key_ref_t key_ref;
+ 	long ret;
++	char *key_data;
+ 
+ 	/* find the key first */
+ 	key_ref = lookup_user_key(keyid, 0, 0);
+ 	if (IS_ERR(key_ref)) {
+ 		ret = -ENOKEY;
+-		goto error;
++		goto out;
+ 	}
+ 
+ 	key = key_ref_to_ptr(key_ref);
+ 
+ 	ret = key_read_state(key);
+ 	if (ret < 0)
+-		goto error2; /* Negatively instantiated */
++		goto key_put_out; /* Negatively instantiated */
+ 
+ 	/* see if we can read it directly */
+ 	ret = key_permission(key_ref, KEY_NEED_READ);
+ 	if (ret == 0)
+ 		goto can_read_key;
+ 	if (ret != -EACCES)
+-		goto error2;
++		goto key_put_out;
+ 
+ 	/* we can't; see if it's searchable from this process's keyrings
+ 	 * - we automatically take account of the fact that it may be
+@@ -839,26 +855,51 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
+ 	 */
+ 	if (!is_key_possessed(key_ref)) {
+ 		ret = -EACCES;
+-		goto error2;
++		goto key_put_out;
+ 	}
+ 
+ 	/* the key is probably readable - now try to read it */
+ can_read_key:
+-	ret = -EOPNOTSUPP;
+-	if (key->type->read) {
+-		/* Read the data with the semaphore held (since we might sleep)
+-		 * to protect against the key being updated or revoked.
+-		 */
+-		down_read(&key->sem);
+-		ret = key_validate(key);
+-		if (ret == 0)
+-			ret = key->type->read(key, buffer, buflen);
+-		up_read(&key->sem);
++	if (!key->type->read) {
++		ret = -EOPNOTSUPP;
++		goto key_put_out;
+ 	}
+ 
+-error2:
++	if (!buffer || !buflen) {
++		/* Get the key length from the read method */
++		ret = __keyctl_read_key(key, NULL, 0);
++		goto key_put_out;
++	}
++
++	/*
++	 * Read the data with the semaphore held (since we might sleep)
++	 * to protect against the key being updated or revoked.
++	 *
++	 * Allocating a temporary buffer to hold the keys before
++	 * transferring them to user buffer to avoid potential
++	 * deadlock involving page fault and mmap_sem.
++	 */
++	key_data = kmalloc(buflen, GFP_KERNEL);
++
++	if (!key_data) {
++		ret = -ENOMEM;
++		goto key_put_out;
++	}
++	ret = __keyctl_read_key(key, key_data, buflen);
++
++	/*
++	 * Read methods will just return the required length without
++	 * any copying if the provided length isn't large enough.
++	 */
++	if (ret > 0 && ret <= buflen) {
++		if (copy_to_user(buffer, key_data, ret))
++			ret = -EFAULT;
++	}
++	kzfree(key_data);
++
++key_put_out:
+ 	key_put(key);
+-error:
++out:
+ 	return ret;
+ }
+ 
+diff --git a/security/keys/keyring.c b/security/keys/keyring.c
+index febf36c6ddc5..5ca620d31cd3 100644
+--- a/security/keys/keyring.c
++++ b/security/keys/keyring.c
+@@ -459,7 +459,6 @@ static int keyring_read_iterator(const void *object, void *data)
+ {
+ 	struct keyring_read_iterator_context *ctx = data;
+ 	const struct key *key = keyring_ptr_to_key(object);
+-	int ret;
+ 
+ 	kenter("{%s,%d},,{%zu/%zu}",
+ 	       key->type->name, key->serial, ctx->count, ctx->buflen);
+@@ -467,10 +466,7 @@ static int keyring_read_iterator(const void *object, void *data)
+ 	if (ctx->count >= ctx->buflen)
+ 		return 1;
+ 
+-	ret = put_user(key->serial, ctx->buffer);
+-	if (ret < 0)
+-		return ret;
+-	ctx->buffer++;
++	*ctx->buffer++ = key->serial;
+ 	ctx->count += sizeof(key->serial);
+ 	return 0;
+ }
+diff --git a/security/keys/request_key_auth.c b/security/keys/request_key_auth.c
+index ecba39c93fd9..41e9735006d0 100644
+--- a/security/keys/request_key_auth.c
++++ b/security/keys/request_key_auth.c
+@@ -22,7 +22,7 @@ static int request_key_auth_instantiate(struct key *,
+ static void request_key_auth_describe(const struct key *, struct seq_file *);
+ static void request_key_auth_revoke(struct key *);
+ static void request_key_auth_destroy(struct key *);
+-static long request_key_auth_read(const struct key *, char __user *, size_t);
++static long request_key_auth_read(const struct key *, char *, size_t);
+ 
+ /*
+  * The request-key authorisation key type definition.
+@@ -80,7 +80,7 @@ static void request_key_auth_describe(const struct key *key,
+  * - the key's semaphore is read-locked
+  */
+ static long request_key_auth_read(const struct key *key,
+-				  char __user *buffer, size_t buflen)
++				  char *buffer, size_t buflen)
+ {
+ 	struct request_key_auth *rka = dereference_key_locked(key);
+ 	size_t datalen;
+@@ -97,8 +97,7 @@ static long request_key_auth_read(const struct key *key,
+ 		if (buflen > datalen)
+ 			buflen = datalen;
+ 
+-		if (copy_to_user(buffer, rka->callout_info, buflen) != 0)
+-			ret = -EFAULT;
++		memcpy(buffer, rka->callout_info, buflen);
+ 	}
+ 
+ 	return ret;
+diff --git a/security/keys/trusted-keys/trusted_tpm1.c b/security/keys/trusted-keys/trusted_tpm1.c
+index d2c5ec1e040b..8001ab07e63b 100644
+--- a/security/keys/trusted-keys/trusted_tpm1.c
++++ b/security/keys/trusted-keys/trusted_tpm1.c
+@@ -1130,11 +1130,10 @@ out:
+  * trusted_read - copy the sealed blob data to userspace in hex.
+  * On success, return to userspace the trusted key datablob size.
+  */
+-static long trusted_read(const struct key *key, char __user *buffer,
++static long trusted_read(const struct key *key, char *buffer,
+ 			 size_t buflen)
+ {
+ 	const struct trusted_key_payload *p;
+-	char *ascii_buf;
+ 	char *bufp;
+ 	int i;
+ 
+@@ -1143,18 +1142,9 @@ static long trusted_read(const struct key *key, char __user *buffer,
+ 		return -EINVAL;
+ 
+ 	if (buffer && buflen >= 2 * p->blob_len) {
+-		ascii_buf = kmalloc_array(2, p->blob_len, GFP_KERNEL);
+-		if (!ascii_buf)
+-			return -ENOMEM;
+-
+-		bufp = ascii_buf;
++		bufp = buffer;
+ 		for (i = 0; i < p->blob_len; i++)
+ 			bufp = hex_byte_pack(bufp, p->blob[i]);
+-		if (copy_to_user(buffer, ascii_buf, 2 * p->blob_len) != 0) {
+-			kzfree(ascii_buf);
+-			return -EFAULT;
+-		}
+-		kzfree(ascii_buf);
+ 	}
+ 	return 2 * p->blob_len;
+ }
+diff --git a/security/keys/user_defined.c b/security/keys/user_defined.c
+index 6f12de4ce549..07d4287e9084 100644
+--- a/security/keys/user_defined.c
++++ b/security/keys/user_defined.c
+@@ -168,7 +168,7 @@ EXPORT_SYMBOL_GPL(user_describe);
+  * read the key data
+  * - the key's semaphore is read-locked
+  */
+-long user_read(const struct key *key, char __user *buffer, size_t buflen)
++long user_read(const struct key *key, char *buffer, size_t buflen)
+ {
+ 	const struct user_key_payload *upayload;
+ 	long ret;
+@@ -181,8 +181,7 @@ long user_read(const struct key *key, char __user *buffer, size_t buflen)
+ 		if (buflen > upayload->datalen)
+ 			buflen = upayload->datalen;
+ 
+-		if (copy_to_user(buffer, upayload->data, buflen) != 0)
+-			ret = -EFAULT;
++		memcpy(buffer, upayload->data, buflen);
+ 	}
+ 
+ 	return ret;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index bd093593f8fb..f41d8b7864c1 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1071,6 +1071,8 @@ static int azx_freeze_noirq(struct device *dev)
+ 	struct azx *chip = card->private_data;
+ 	struct pci_dev *pci = to_pci_dev(dev);
+ 
++	if (!azx_is_pm_ready(card))
++		return 0;
+ 	if (chip->driver_type == AZX_DRIVER_SKL)
+ 		pci_set_power_state(pci, PCI_D3hot);
+ 
+@@ -1083,6 +1085,8 @@ static int azx_thaw_noirq(struct device *dev)
+ 	struct azx *chip = card->private_data;
+ 	struct pci_dev *pci = to_pci_dev(dev);
+ 
++	if (!azx_is_pm_ready(card))
++		return 0;
+ 	if (chip->driver_type == AZX_DRIVER_SKL)
+ 		pci_set_power_state(pci, PCI_D0);
+ 
+@@ -2027,24 +2031,15 @@ static void azx_firmware_cb(const struct firmware *fw, void *context)
+ {
+ 	struct snd_card *card = context;
+ 	struct azx *chip = card->private_data;
+-	struct pci_dev *pci = chip->pci;
+ 
+-	if (!fw) {
+-		dev_err(card->dev, "Cannot load firmware, aborting\n");
+-		goto error;
+-	}
+-
+-	chip->fw = fw;
++	if (fw)
++		chip->fw = fw;
++	else
++		dev_err(card->dev, "Cannot load firmware, continue without patching\n");
+ 	if (!chip->disabled) {
+ 		/* continue probing */
+-		if (azx_probe_continue(chip))
+-			goto error;
++		azx_probe_continue(chip);
+ 	}
+-	return; /* OK */
+-
+- error:
+-	snd_card_free(card);
+-	pci_set_drvdata(pci, NULL);
+ }
+ #endif
+ 
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 431bd25c6cdb..6d47345a310b 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -289,7 +289,7 @@ int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
+ 
+ static __u32 get_xdp_id(struct xdp_link_info *info, __u32 flags)
+ {
+-	if (info->attach_mode != XDP_ATTACHED_MULTI)
++	if (info->attach_mode != XDP_ATTACHED_MULTI && !flags)
+ 		return info->prog_id;
+ 	if (flags & XDP_FLAGS_DRV_MODE)
+ 		return info->drv_prog_id;
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 4768d91c6d68..2b765bbbef92 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1011,10 +1011,7 @@ static struct rela *find_jump_table(struct objtool_file *file,
+ 	 * it.
+ 	 */
+ 	for (;
+-	     &insn->list != &file->insn_list &&
+-	     insn->sec == func->sec &&
+-	     insn->offset >= func->offset;
+-
++	     &insn->list != &file->insn_list && insn->func && insn->func->pfunc == func;
+ 	     insn = insn->first_jump_src ?: list_prev_entry(insn, list)) {
+ 
+ 		if (insn != orig_insn && insn->type == INSN_JUMP_DYNAMIC)
+diff --git a/tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c b/tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c
+index eba9a970703b..925722217edf 100644
+--- a/tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c
++++ b/tools/testing/selftests/bpf/prog_tests/get_stack_raw_tp.c
+@@ -82,6 +82,7 @@ static void get_stack_print_output(void *ctx, int cpu, void *data, __u32 size)
+ void test_get_stack_raw_tp(void)
+ {
+ 	const char *file = "./test_get_stack_rawtp.o";
++	const char *file_err = "./test_get_stack_rawtp_err.o";
+ 	const char *prog_name = "raw_tracepoint/sys_enter";
+ 	int i, err, prog_fd, exp_cnt = MAX_CNT_RAWTP;
+ 	struct perf_buffer_opts pb_opts = {};
+@@ -93,6 +94,10 @@ void test_get_stack_raw_tp(void)
+ 	struct bpf_map *map;
+ 	cpu_set_t cpu_set;
+ 
++	err = bpf_prog_load(file_err, BPF_PROG_TYPE_RAW_TRACEPOINT, &obj, &prog_fd);
++	if (CHECK(err >= 0, "prog_load raw tp", "err %d errno %d\n", err, errno))
++		return;
++
+ 	err = bpf_prog_load(file, BPF_PROG_TYPE_RAW_TRACEPOINT, &obj, &prog_fd);
+ 	if (CHECK(err, "prog_load raw tp", "err %d errno %d\n", err, errno))
+ 		return;
+diff --git a/tools/testing/selftests/bpf/progs/test_get_stack_rawtp_err.c b/tools/testing/selftests/bpf/progs/test_get_stack_rawtp_err.c
+new file mode 100644
+index 000000000000..8941a41c2a55
+--- /dev/null
++++ b/tools/testing/selftests/bpf/progs/test_get_stack_rawtp_err.c
+@@ -0,0 +1,26 @@
++// SPDX-License-Identifier: GPL-2.0
++
++#include <linux/bpf.h>
++#include <bpf/bpf_helpers.h>
++
++#define MAX_STACK_RAWTP 10
++
++SEC("raw_tracepoint/sys_enter")
++int bpf_prog2(void *ctx)
++{
++	__u64 stack[MAX_STACK_RAWTP];
++	int error;
++
++	/* set all the flags which should return -EINVAL */
++	error = bpf_get_stack(ctx, stack, 0, -1);
++	if (error < 0)
++		goto loop;
++
++	return error;
++loop:
++	while (1) {
++		error++;
++	}
++}
++
++char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+index f24d50f09dbe..371926771db5 100644
+--- a/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
++++ b/tools/testing/selftests/bpf/verifier/bpf_get_stack.c
+@@ -9,17 +9,17 @@
+ 	BPF_RAW_INSN(BPF_JMP | BPF_CALL, 0, 0, 0, BPF_FUNC_map_lookup_elem),
+ 	BPF_JMP_IMM(BPF_JEQ, BPF_REG_0, 0, 28),
+ 	BPF_MOV64_REG(BPF_REG_7, BPF_REG_0),
+-	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)),
++	BPF_MOV64_IMM(BPF_REG_9, sizeof(struct test_val)/2),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
+-	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct test_val)),
++	BPF_MOV64_IMM(BPF_REG_3, sizeof(struct test_val)/2),
+ 	BPF_MOV64_IMM(BPF_REG_4, 256),
+ 	BPF_EMIT_CALL(BPF_FUNC_get_stack),
+ 	BPF_MOV64_IMM(BPF_REG_1, 0),
+ 	BPF_MOV64_REG(BPF_REG_8, BPF_REG_0),
+ 	BPF_ALU64_IMM(BPF_LSH, BPF_REG_8, 32),
+ 	BPF_ALU64_IMM(BPF_ARSH, BPF_REG_8, 32),
+-	BPF_JMP_REG(BPF_JSLT, BPF_REG_1, BPF_REG_8, 16),
++	BPF_JMP_REG(BPF_JSLT, BPF_REG_8, BPF_REG_1, 16),
+ 	BPF_ALU64_REG(BPF_SUB, BPF_REG_9, BPF_REG_8),
+ 	BPF_MOV64_REG(BPF_REG_2, BPF_REG_7),
+ 	BPF_ALU64_REG(BPF_ADD, BPF_REG_2, BPF_REG_8),
+@@ -29,7 +29,7 @@
+ 	BPF_MOV64_REG(BPF_REG_3, BPF_REG_2),
+ 	BPF_ALU64_REG(BPF_ADD, BPF_REG_3, BPF_REG_1),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_7),
+-	BPF_MOV64_IMM(BPF_REG_5, sizeof(struct test_val)),
++	BPF_MOV64_IMM(BPF_REG_5, sizeof(struct test_val)/2),
+ 	BPF_ALU64_REG(BPF_ADD, BPF_REG_1, BPF_REG_5),
+ 	BPF_JMP_REG(BPF_JGE, BPF_REG_3, BPF_REG_1, 4),
+ 	BPF_MOV64_REG(BPF_REG_1, BPF_REG_6),


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-04-29 17:55 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-04-29 17:55 UTC (permalink / raw)
  To: gentoo-commits

commit:     9073e1453c396ffd5d5019142d436ff31e4826b4
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Apr 29 17:55:14 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Apr 29 17:55:14 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=9073e145

Linux patch 5.6.8

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1007_linux-5.6.8.patch | 6394 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 6398 insertions(+)

diff --git a/0000_README b/0000_README
index 8000cff..d756ad3 100644
--- a/0000_README
+++ b/0000_README
@@ -71,6 +71,10 @@ Patch:  1006_linux-5.6.7.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.7
 
+Patch:  1007_linux-5.6.8.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.8
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1007_linux-5.6.8.patch b/1007_linux-5.6.8.patch
new file mode 100644
index 0000000..50e5e7d
--- /dev/null
+++ b/1007_linux-5.6.8.patch
@@ -0,0 +1,6394 @@
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 6ba631cc5a56..20aac805e197 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -5085,8 +5085,7 @@
+ 
+ 	usbcore.old_scheme_first=
+ 			[USB] Start with the old device initialization
+-			scheme,  applies only to low and full-speed devices
+-			 (default 0 = off).
++			scheme (default 0 = off).
+ 
+ 	usbcore.usbfs_memory_mb=
+ 			[USB] Memory limit (in MB) for buffers allocated by
+diff --git a/Makefile b/Makefile
+index b64df959e5d7..e7101c99d81b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 7
++SUBLEVEL = 8
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/mach-imx/Makefile b/arch/arm/mach-imx/Makefile
+index 03506ce46149..e7364e6c8c6b 100644
+--- a/arch/arm/mach-imx/Makefile
++++ b/arch/arm/mach-imx/Makefile
+@@ -91,8 +91,10 @@ AFLAGS_suspend-imx6.o :=-Wa,-march=armv7-a
+ obj-$(CONFIG_SOC_IMX6) += suspend-imx6.o
+ obj-$(CONFIG_SOC_IMX53) += suspend-imx53.o
+ endif
++ifeq ($(CONFIG_ARM_CPU_SUSPEND),y)
+ AFLAGS_resume-imx6.o :=-Wa,-march=armv7-a
+ obj-$(CONFIG_SOC_IMX6) += resume-imx6.o
++endif
+ obj-$(CONFIG_SOC_IMX6) += pm-imx6.o
+ 
+ obj-$(CONFIG_SOC_IMX1) += mach-imx1.o
+diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
+index 16af0d8d90a8..89e7f891bcd0 100644
+--- a/arch/powerpc/kernel/entry_32.S
++++ b/arch/powerpc/kernel/entry_32.S
+@@ -710,7 +710,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_SPE)
+ 	stw	r10,_CCR(r1)
+ 	stw	r1,KSP(r3)	/* Set old stack pointer */
+ 
+-	kuap_check r2, r4
++	kuap_check r2, r0
+ #ifdef CONFIG_SMP
+ 	/* We need a sync somewhere here to make sure that if the
+ 	 * previous task gets rescheduled on another CPU, it sees all
+diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
+index 438a9befce41..8105010b0e76 100644
+--- a/arch/powerpc/kernel/setup_64.c
++++ b/arch/powerpc/kernel/setup_64.c
+@@ -534,6 +534,8 @@ static bool __init parse_cache_info(struct device_node *np,
+ 	lsizep = of_get_property(np, propnames[3], NULL);
+ 	if (bsizep == NULL)
+ 		bsizep = lsizep;
++	if (lsizep == NULL)
++		lsizep = bsizep;
+ 	if (lsizep != NULL)
+ 		lsize = be32_to_cpu(*lsizep);
+ 	if (bsizep != NULL)
+diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
+index 1168e8b37e30..716f8d0960a7 100644
+--- a/arch/powerpc/kernel/time.c
++++ b/arch/powerpc/kernel/time.c
+@@ -522,35 +522,6 @@ static inline void clear_irq_work_pending(void)
+ 		"i" (offsetof(struct paca_struct, irq_work_pending)));
+ }
+ 
+-void arch_irq_work_raise(void)
+-{
+-	preempt_disable();
+-	set_irq_work_pending_flag();
+-	/*
+-	 * Non-nmi code running with interrupts disabled will replay
+-	 * irq_happened before it re-enables interrupts, so setthe
+-	 * decrementer there instead of causing a hardware exception
+-	 * which would immediately hit the masked interrupt handler
+-	 * and have the net effect of setting the decrementer in
+-	 * irq_happened.
+-	 *
+-	 * NMI interrupts can not check this when they return, so the
+-	 * decrementer hardware exception is raised, which will fire
+-	 * when interrupts are next enabled.
+-	 *
+-	 * BookE does not support this yet, it must audit all NMI
+-	 * interrupt handlers to ensure they call nmi_enter() so this
+-	 * check would be correct.
+-	 */
+-	if (IS_ENABLED(CONFIG_BOOKE) || !irqs_disabled() || in_nmi()) {
+-		set_dec(1);
+-	} else {
+-		hard_irq_disable();
+-		local_paca->irq_happened |= PACA_IRQ_DEC;
+-	}
+-	preempt_enable();
+-}
+-
+ #else /* 32-bit */
+ 
+ DEFINE_PER_CPU(u8, irq_work_pending);
+@@ -559,16 +530,27 @@ DEFINE_PER_CPU(u8, irq_work_pending);
+ #define test_irq_work_pending()		__this_cpu_read(irq_work_pending)
+ #define clear_irq_work_pending()	__this_cpu_write(irq_work_pending, 0)
+ 
++#endif /* 32 vs 64 bit */
++
+ void arch_irq_work_raise(void)
+ {
++	/*
++	 * 64-bit code that uses irq soft-mask can just cause an immediate
++	 * interrupt here that gets soft masked, if this is called under
++	 * local_irq_disable(). It might be possible to prevent that happening
++	 * by noticing interrupts are disabled and setting decrementer pending
++	 * to be replayed when irqs are enabled. The problem there is that
++	 * tracing can call irq_work_raise, including in code that does low
++	 * level manipulations of irq soft-mask state (e.g., trace_hardirqs_on)
++	 * which could get tangled up if we're messing with the same state
++	 * here.
++	 */
+ 	preempt_disable();
+ 	set_irq_work_pending_flag();
+ 	set_dec(1);
+ 	preempt_enable();
+ }
+ 
+-#endif /* 32 vs 64 bit */
+-
+ #else  /* CONFIG_IRQ_WORK */
+ 
+ #define test_irq_work_pending()	0
+diff --git a/arch/powerpc/mm/nohash/8xx.c b/arch/powerpc/mm/nohash/8xx.c
+index 3189308dece4..d83a12c5bc7f 100644
+--- a/arch/powerpc/mm/nohash/8xx.c
++++ b/arch/powerpc/mm/nohash/8xx.c
+@@ -185,6 +185,7 @@ void mmu_mark_initmem_nx(void)
+ 			mmu_mapin_ram_chunk(etext8, einittext8, PAGE_KERNEL);
+ 		}
+ 	}
++	_tlbil_all();
+ }
+ 
+ #ifdef CONFIG_STRICT_KERNEL_RWX
+@@ -199,6 +200,8 @@ void mmu_mark_rodata_ro(void)
+ 				      ~(LARGE_PAGE_SIZE_8M - 1)));
+ 	mmu_patch_addis(&patch__dtlbmiss_romem_top, -__pa(_sinittext));
+ 
++	_tlbil_all();
++
+ 	/* Update page tables for PTDUMP and BDI */
+ 	mmu_mapin_ram_chunk(0, sinittext, __pgprot(0));
+ 	mmu_mapin_ram_chunk(0, etext, PAGE_KERNEL_ROX);
+diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
+index 6caedc88474f..3b5ffc92715d 100644
+--- a/arch/powerpc/platforms/Kconfig.cputype
++++ b/arch/powerpc/platforms/Kconfig.cputype
+@@ -397,7 +397,7 @@ config PPC_KUAP
+ 
+ config PPC_KUAP_DEBUG
+ 	bool "Extra debugging for Kernel Userspace Access Protection"
+-	depends on PPC_HAVE_KUAP && (PPC_RADIX_MMU || PPC_32)
++	depends on PPC_KUAP && (PPC_RADIX_MMU || PPC32)
+ 	help
+ 	  Add extra debugging for Kernel Userspace Access Protection (KUAP)
+ 	  If you're unsure, say N.
+diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
+index 1d7f973c647b..43710b69e09e 100644
+--- a/arch/powerpc/platforms/pseries/ras.c
++++ b/arch/powerpc/platforms/pseries/ras.c
+@@ -683,6 +683,17 @@ static int mce_handle_error(struct pt_regs *regs, struct rtas_error_log *errp)
+ #endif
+ 
+ out:
++	/*
++	 * Enable translation as we will be accessing per-cpu variables
++	 * in save_mce_event() which may fall outside RMO region, also
++	 * leave it enabled because subsequently we will be queuing work
++	 * to workqueues where per-cpu variables are again accessed; besides,
++	 * fwnmi_release_errinfo() crashes when called in realmode on
++	 * pseries.
++	 * Note: All the realmode handling like flushing SLB entries for
++	 *       SLB multihit is done by now.
++	 */
++	mtmsr(mfmsr() | MSR_IR | MSR_DR);
+ 	save_mce_event(regs, disposition == RTAS_DISP_FULLY_RECOVERED,
+ 			&mce_err, regs->nip, eaddr, paddr);
+ 
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index c2e6d4ba4e23..198a6b320018 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -1930,6 +1930,9 @@ static int gfn_to_memslot_approx(struct kvm_memslots *slots, gfn_t gfn)
+ 			start = slot + 1;
+ 	}
+ 
++	if (start >= slots->used_slots)
++		return slots->used_slots - 1;
++
+ 	if (gfn >= memslots[start].base_gfn &&
+ 	    gfn < memslots[start].base_gfn + memslots[start].npages) {
+ 		atomic_set(&slots->lru_slot, start);
+diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
+index c4f8039a35e8..0267405ab7c6 100644
+--- a/arch/s390/lib/uaccess.c
++++ b/arch/s390/lib/uaccess.c
+@@ -64,10 +64,13 @@ mm_segment_t enable_sacf_uaccess(void)
+ {
+ 	mm_segment_t old_fs;
+ 	unsigned long asce, cr;
++	unsigned long flags;
+ 
+ 	old_fs = current->thread.mm_segment;
+ 	if (old_fs & 1)
+ 		return old_fs;
++	/* protect against a concurrent page table upgrade */
++	local_irq_save(flags);
+ 	current->thread.mm_segment |= 1;
+ 	asce = S390_lowcore.kernel_asce;
+ 	if (likely(old_fs == USER_DS)) {
+@@ -83,6 +86,7 @@ mm_segment_t enable_sacf_uaccess(void)
+ 		__ctl_load(asce, 7, 7);
+ 		set_cpu_flag(CIF_ASCE_SECONDARY);
+ 	}
++	local_irq_restore(flags);
+ 	return old_fs;
+ }
+ EXPORT_SYMBOL(enable_sacf_uaccess);
+diff --git a/arch/s390/mm/pgalloc.c b/arch/s390/mm/pgalloc.c
+index 3dd253f81a77..46071be897ab 100644
+--- a/arch/s390/mm/pgalloc.c
++++ b/arch/s390/mm/pgalloc.c
+@@ -70,8 +70,20 @@ static void __crst_table_upgrade(void *arg)
+ {
+ 	struct mm_struct *mm = arg;
+ 
+-	if (current->active_mm == mm)
+-		set_user_asce(mm);
++	/* we must change all active ASCEs to avoid the creation of new TLBs */
++	if (current->active_mm == mm) {
++		S390_lowcore.user_asce = mm->context.asce;
++		if (current->thread.mm_segment == USER_DS) {
++			__ctl_load(S390_lowcore.user_asce, 1, 1);
++			/* Mark user-ASCE present in CR1 */
++			clear_cpu_flag(CIF_ASCE_PRIMARY);
++		}
++		if (current->thread.mm_segment == USER_DS_SACF) {
++			__ctl_load(S390_lowcore.user_asce, 7, 7);
++			/* enable_sacf_uaccess does all or nothing */
++			WARN_ON(!test_cpu_flag(CIF_ASCE_SECONDARY));
++		}
++	}
+ 	__tlb_flush_local();
+ }
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index 0a7867897507..c1ffe7d24f83 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -4571,7 +4571,7 @@ static int handle_rmode_exception(struct kvm_vcpu *vcpu,
+  */
+ static void kvm_machine_check(void)
+ {
+-#if defined(CONFIG_X86_MCE) && defined(CONFIG_X86_64)
++#if defined(CONFIG_X86_MCE)
+ 	struct pt_regs regs = {
+ 		.cs = 3, /* Fake ring 3 no matter what the guest ran on */
+ 		.flags = X86_EFLAGS_IF,
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index 564fae77711d..ebe4c2e9834b 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -468,7 +468,7 @@ int blk_drop_partitions(struct gendisk *disk, struct block_device *bdev)
+ 
+ 	if (!disk_part_scan_enabled(disk))
+ 		return 0;
+-	if (bdev->bd_part_count || bdev->bd_super)
++	if (bdev->bd_part_count || bdev->bd_openers > 1)
+ 		return -EBUSY;
+ 	res = invalidate_partition(disk, 0);
+ 	if (res)
+diff --git a/drivers/block/loop.c b/drivers/block/loop.c
+index 739b372a5112..d943e713d5e3 100644
+--- a/drivers/block/loop.c
++++ b/drivers/block/loop.c
+@@ -427,11 +427,12 @@ static int lo_fallocate(struct loop_device *lo, struct request *rq, loff_t pos,
+ 	 * information.
+ 	 */
+ 	struct file *file = lo->lo_backing_file;
++	struct request_queue *q = lo->lo_queue;
+ 	int ret;
+ 
+ 	mode |= FALLOC_FL_KEEP_SIZE;
+ 
+-	if ((!file->f_op->fallocate) || lo->lo_encrypt_key_size) {
++	if (!blk_queue_discard(q)) {
+ 		ret = -EOPNOTSUPP;
+ 		goto out;
+ 	}
+@@ -865,28 +866,47 @@ static void loop_config_discard(struct loop_device *lo)
+ 	struct inode *inode = file->f_mapping->host;
+ 	struct request_queue *q = lo->lo_queue;
+ 
++	/*
++	 * If the backing device is a block device, mirror its zeroing
++	 * capability. Set the discard sectors to the block device's zeroing
++	 * capabilities because loop discards result in blkdev_issue_zeroout(),
++	 * not blkdev_issue_discard(). This maintains consistent behavior with
++	 * file-backed loop devices: discarded regions read back as zero.
++	 */
++	if (S_ISBLK(inode->i_mode) && !lo->lo_encrypt_key_size) {
++		struct request_queue *backingq;
++
++		backingq = bdev_get_queue(inode->i_bdev);
++		blk_queue_max_discard_sectors(q,
++			backingq->limits.max_write_zeroes_sectors);
++
++		blk_queue_max_write_zeroes_sectors(q,
++			backingq->limits.max_write_zeroes_sectors);
++
+ 	/*
+ 	 * We use punch hole to reclaim the free space used by the
+ 	 * image a.k.a. discard. However we do not support discard if
+ 	 * encryption is enabled, because it may give an attacker
+ 	 * useful information.
+ 	 */
+-	if ((!file->f_op->fallocate) ||
+-	    lo->lo_encrypt_key_size) {
++	} else if (!file->f_op->fallocate || lo->lo_encrypt_key_size) {
+ 		q->limits.discard_granularity = 0;
+ 		q->limits.discard_alignment = 0;
+ 		blk_queue_max_discard_sectors(q, 0);
+ 		blk_queue_max_write_zeroes_sectors(q, 0);
+-		blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+-		return;
+-	}
+ 
+-	q->limits.discard_granularity = inode->i_sb->s_blocksize;
+-	q->limits.discard_alignment = 0;
++	} else {
++		q->limits.discard_granularity = inode->i_sb->s_blocksize;
++		q->limits.discard_alignment = 0;
+ 
+-	blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
+-	blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
+-	blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
++		blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
++		blk_queue_max_write_zeroes_sectors(q, UINT_MAX >> 9);
++	}
++
++	if (q->limits.max_write_zeroes_sectors)
++		blk_queue_flag_set(QUEUE_FLAG_DISCARD, q);
++	else
++		blk_queue_flag_clear(QUEUE_FLAG_DISCARD, q);
+ }
+ 
+ static void loop_unprepare_queue(struct loop_device *lo)
+diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
+index a438b1206fcb..1621ce818705 100644
+--- a/drivers/char/tpm/tpm-interface.c
++++ b/drivers/char/tpm/tpm-interface.c
+@@ -323,7 +323,7 @@ int tpm_pcr_extend(struct tpm_chip *chip, u32 pcr_idx,
+ 
+ 	for (i = 0; i < chip->nr_allocated_banks; i++) {
+ 		if (digests[i].alg_id != chip->allocated_banks[i].alg_id) {
+-			rc = EINVAL;
++			rc = -EINVAL;
+ 			goto out;
+ 		}
+ 	}
+diff --git a/drivers/char/tpm/tpm_ibmvtpm.c b/drivers/char/tpm/tpm_ibmvtpm.c
+index 78cc52690177..e82013d587b4 100644
+--- a/drivers/char/tpm/tpm_ibmvtpm.c
++++ b/drivers/char/tpm/tpm_ibmvtpm.c
+@@ -1,6 +1,6 @@
+ // SPDX-License-Identifier: GPL-2.0-only
+ /*
+- * Copyright (C) 2012 IBM Corporation
++ * Copyright (C) 2012-2020 IBM Corporation
+  *
+  * Author: Ashley Lai <ashleydlai@gmail.com>
+  *
+@@ -133,6 +133,64 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	return len;
+ }
+ 
++/**
++ * ibmvtpm_crq_send_init - Send a CRQ initialize message
++ * @ibmvtpm:	vtpm device struct
++ *
++ * Return:
++ *	0 on success.
++ *	Non-zero on failure.
++ */
++static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm)
++{
++	int rc;
++
++	rc = ibmvtpm_send_crq_word(ibmvtpm->vdev, INIT_CRQ_CMD);
++	if (rc != H_SUCCESS)
++		dev_err(ibmvtpm->dev,
++			"%s failed rc=%d\n", __func__, rc);
++
++	return rc;
++}
++
++/**
++ * tpm_ibmvtpm_resume - Resume from suspend
++ *
++ * @dev:	device struct
++ *
++ * Return: Always 0.
++ */
++static int tpm_ibmvtpm_resume(struct device *dev)
++{
++	struct tpm_chip *chip = dev_get_drvdata(dev);
++	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++	int rc = 0;
++
++	do {
++		if (rc)
++			msleep(100);
++		rc = plpar_hcall_norets(H_ENABLE_CRQ,
++					ibmvtpm->vdev->unit_address);
++	} while (rc == H_IN_PROGRESS || rc == H_BUSY || H_IS_LONG_BUSY(rc));
++
++	if (rc) {
++		dev_err(dev, "Error enabling ibmvtpm rc=%d\n", rc);
++		return rc;
++	}
++
++	rc = vio_enable_interrupts(ibmvtpm->vdev);
++	if (rc) {
++		dev_err(dev, "Error vio_enable_interrupts rc=%d\n", rc);
++		return rc;
++	}
++
++	rc = ibmvtpm_crq_send_init(ibmvtpm);
++	if (rc)
++		dev_err(dev, "Error send_init rc=%d\n", rc);
++
++	return rc;
++}
++
+ /**
+  * tpm_ibmvtpm_send() - Send a TPM command
+  * @chip:	tpm chip struct
+@@ -146,6 +204,7 @@ static int tpm_ibmvtpm_recv(struct tpm_chip *chip, u8 *buf, size_t count)
+ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ {
+ 	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
++	bool retry = true;
+ 	int rc, sig;
+ 
+ 	if (!ibmvtpm->rtce_buf) {
+@@ -179,18 +238,27 @@ static int tpm_ibmvtpm_send(struct tpm_chip *chip, u8 *buf, size_t count)
+ 	 */
+ 	ibmvtpm->tpm_processing_cmd = true;
+ 
++again:
+ 	rc = ibmvtpm_send_crq(ibmvtpm->vdev,
+ 			IBMVTPM_VALID_CMD, VTPM_TPM_COMMAND,
+ 			count, ibmvtpm->rtce_dma_handle);
+ 	if (rc != H_SUCCESS) {
++		/*
++		 * H_CLOSED can be returned after LPM resume.  Call
++		 * tpm_ibmvtpm_resume() to re-enable the CRQ then retry
++		 * ibmvtpm_send_crq() once before failing.
++		 */
++		if (rc == H_CLOSED && retry) {
++			tpm_ibmvtpm_resume(ibmvtpm->dev);
++			retry = false;
++			goto again;
++		}
+ 		dev_err(ibmvtpm->dev, "tpm_ibmvtpm_send failed rc=%d\n", rc);
+-		rc = 0;
+ 		ibmvtpm->tpm_processing_cmd = false;
+-	} else
+-		rc = 0;
++	}
+ 
+ 	spin_unlock(&ibmvtpm->rtce_lock);
+-	return rc;
++	return 0;
+ }
+ 
+ static void tpm_ibmvtpm_cancel(struct tpm_chip *chip)
+@@ -268,26 +336,6 @@ static int ibmvtpm_crq_send_init_complete(struct ibmvtpm_dev *ibmvtpm)
+ 	return rc;
+ }
+ 
+-/**
+- * ibmvtpm_crq_send_init - Send a CRQ initialize message
+- * @ibmvtpm:	vtpm device struct
+- *
+- * Return:
+- *	0 on success.
+- *	Non-zero on failure.
+- */
+-static int ibmvtpm_crq_send_init(struct ibmvtpm_dev *ibmvtpm)
+-{
+-	int rc;
+-
+-	rc = ibmvtpm_send_crq_word(ibmvtpm->vdev, INIT_CRQ_CMD);
+-	if (rc != H_SUCCESS)
+-		dev_err(ibmvtpm->dev,
+-			"ibmvtpm_crq_send_init failed rc=%d\n", rc);
+-
+-	return rc;
+-}
+-
+ /**
+  * tpm_ibmvtpm_remove - ibm vtpm remove entry point
+  * @vdev:	vio device struct
+@@ -400,44 +448,6 @@ static int ibmvtpm_reset_crq(struct ibmvtpm_dev *ibmvtpm)
+ 				  ibmvtpm->crq_dma_handle, CRQ_RES_BUF_SIZE);
+ }
+ 
+-/**
+- * tpm_ibmvtpm_resume - Resume from suspend
+- *
+- * @dev:	device struct
+- *
+- * Return: Always 0.
+- */
+-static int tpm_ibmvtpm_resume(struct device *dev)
+-{
+-	struct tpm_chip *chip = dev_get_drvdata(dev);
+-	struct ibmvtpm_dev *ibmvtpm = dev_get_drvdata(&chip->dev);
+-	int rc = 0;
+-
+-	do {
+-		if (rc)
+-			msleep(100);
+-		rc = plpar_hcall_norets(H_ENABLE_CRQ,
+-					ibmvtpm->vdev->unit_address);
+-	} while (rc == H_IN_PROGRESS || rc == H_BUSY || H_IS_LONG_BUSY(rc));
+-
+-	if (rc) {
+-		dev_err(dev, "Error enabling ibmvtpm rc=%d\n", rc);
+-		return rc;
+-	}
+-
+-	rc = vio_enable_interrupts(ibmvtpm->vdev);
+-	if (rc) {
+-		dev_err(dev, "Error vio_enable_interrupts rc=%d\n", rc);
+-		return rc;
+-	}
+-
+-	rc = ibmvtpm_crq_send_init(ibmvtpm);
+-	if (rc)
+-		dev_err(dev, "Error send_init rc=%d\n", rc);
+-
+-	return rc;
+-}
+-
+ static bool tpm_ibmvtpm_req_canceled(struct tpm_chip *chip, u8 status)
+ {
+ 	return (status == 0);
+diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
+index 27c6ca031e23..2435216bd10a 100644
+--- a/drivers/char/tpm/tpm_tis_core.c
++++ b/drivers/char/tpm/tpm_tis_core.c
+@@ -433,6 +433,9 @@ static void disable_interrupts(struct tpm_chip *chip)
+ 	u32 intmask;
+ 	int rc;
+ 
++	if (priv->irq == 0)
++		return;
++
+ 	rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
+ 	if (rc < 0)
+ 		intmask = 0;
+@@ -1062,9 +1065,12 @@ int tpm_tis_core_init(struct device *dev, struct tpm_tis_data *priv, int irq,
+ 		if (irq) {
+ 			tpm_tis_probe_irq_single(chip, intmask, IRQF_SHARED,
+ 						 irq);
+-			if (!(chip->flags & TPM_CHIP_FLAG_IRQ))
++			if (!(chip->flags & TPM_CHIP_FLAG_IRQ)) {
+ 				dev_err(&chip->dev, FW_BUG
+ 					"TPM interrupt not working, polling instead\n");
++
++				disable_interrupts(chip);
++			}
+ 		} else {
+ 			tpm_tis_probe_irq(chip, intmask);
+ 		}
+diff --git a/drivers/fpga/dfl-pci.c b/drivers/fpga/dfl-pci.c
+index 89ca292236ad..538755062ab7 100644
+--- a/drivers/fpga/dfl-pci.c
++++ b/drivers/fpga/dfl-pci.c
+@@ -248,11 +248,13 @@ static int cci_pci_sriov_configure(struct pci_dev *pcidev, int num_vfs)
+ 			return ret;
+ 
+ 		ret = pci_enable_sriov(pcidev, num_vfs);
+-		if (ret)
++		if (ret) {
+ 			dfl_fpga_cdev_config_ports_pf(cdev);
++			return ret;
++		}
+ 	}
+ 
+-	return ret;
++	return num_vfs;
+ }
+ 
+ static void cci_pci_remove(struct pci_dev *pcidev)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 04441dbcba76..188e51600070 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -283,6 +283,8 @@ bool dc_stream_adjust_vmin_vmax(struct dc *dc,
+ 	int i = 0;
+ 	bool ret = false;
+ 
++	stream->adjust = *adjust;
++
+ 	for (i = 0; i < MAX_PIPES; i++) {
+ 		struct pipe_ctx *pipe = &dc->current_state->res_ctx.pipe_ctx[i];
+ 
+@@ -2347,7 +2349,7 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 	enum surface_update_type update_type;
+ 	struct dc_state *context;
+ 	struct dc_context *dc_ctx = dc->ctx;
+-	int i;
++	int i, j;
+ 
+ 	stream_status = dc_stream_get_status(stream);
+ 	context = dc->current_state;
+@@ -2385,6 +2387,17 @@ void dc_commit_updates_for_stream(struct dc *dc,
+ 
+ 		copy_surface_update_to_plane(surface, &srf_updates[i]);
+ 
++		if (update_type >= UPDATE_TYPE_MED) {
++			for (j = 0; j < dc->res_pool->pipe_count; j++) {
++				struct pipe_ctx *pipe_ctx =
++					&context->res_ctx.pipe_ctx[j];
++
++				if (pipe_ctx->plane_state != surface)
++					continue;
++
++				resource_build_scaling_params(pipe_ctx);
++			}
++		}
+ 	}
+ 
+ 	copy_stream_update_to_stream(dc, context, stream, stream_update);
+diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
+index 7b7f0da01346..22713ef0eac8 100644
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -4290,6 +4290,7 @@ int drm_dp_atomic_release_vcpi_slots(struct drm_atomic_state *state,
+ 	if (pos->vcpi) {
+ 		drm_dp_mst_put_port_malloc(port);
+ 		pos->vcpi = 0;
++		pos->pbn = 0;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
+index b2d245963d9f..8accea06185b 100644
+--- a/drivers/gpu/drm/i915/gt/intel_rps.c
++++ b/drivers/gpu/drm/i915/gt/intel_rps.c
+@@ -83,7 +83,8 @@ static void rps_enable_interrupts(struct intel_rps *rps)
+ 	gen6_gt_pm_enable_irq(gt, rps->pm_events);
+ 	spin_unlock_irq(&gt->irq_lock);
+ 
+-	set(gt->uncore, GEN6_PMINTRMSK, rps_pm_mask(rps, rps->cur_freq));
++	intel_uncore_write(gt->uncore,
++			   GEN6_PMINTRMSK, rps_pm_mask(rps, rps->last_freq));
+ }
+ 
+ static void gen6_rps_reset_interrupts(struct intel_rps *rps)
+@@ -117,7 +118,8 @@ static void rps_disable_interrupts(struct intel_rps *rps)
+ 
+ 	rps->pm_events = 0;
+ 
+-	set(gt->uncore, GEN6_PMINTRMSK, rps_pm_sanitize_mask(rps, ~0u));
++	intel_uncore_write(gt->uncore,
++			   GEN6_PMINTRMSK, rps_pm_sanitize_mask(rps, ~0u));
+ 
+ 	spin_lock_irq(&gt->irq_lock);
+ 	gen6_gt_pm_disable_irq(gt, GEN6_PM_RPS_EVENTS);
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index 80c3f963527b..ae622ee6d08c 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -1418,8 +1418,30 @@ static unsigned int stm32_adc_dma_residue(struct stm32_adc *adc)
+ static void stm32_adc_dma_buffer_done(void *data)
+ {
+ 	struct iio_dev *indio_dev = data;
++	struct stm32_adc *adc = iio_priv(indio_dev);
++	int residue = stm32_adc_dma_residue(adc);
++
++	/*
++	 * In DMA mode the trigger services of IIO are not used
++	 * Calling the irq handler associated with the hardware trigger is not
++	 * Calling irq handler associated to the hardware trigger is not
++	 * relevant as the conversions have already been done. Data
++	 * transfers are performed directly in DMA callback instead.
++	 * This implementation avoids calling the trigger irq handler,
++	 * which may sleep, from an atomic context (DMA irq handler context).
++	 */
++	dev_dbg(&indio_dev->dev, "%s bufi=%d\n", __func__, adc->bufi);
+ 
+-	iio_trigger_poll_chained(indio_dev->trig);
++	while (residue >= indio_dev->scan_bytes) {
++		u16 *buffer = (u16 *)&adc->rx_buf[adc->bufi];
++
++		iio_push_to_buffers(indio_dev, buffer);
++
++		residue -= indio_dev->scan_bytes;
++		adc->bufi += indio_dev->scan_bytes;
++		if (adc->bufi >= adc->rx_buf_sz)
++			adc->bufi = 0;
++	}
+ }
+ 
+ static int stm32_adc_dma_start(struct iio_dev *indio_dev)
+@@ -1845,6 +1867,7 @@ static int stm32_adc_probe(struct platform_device *pdev)
+ {
+ 	struct iio_dev *indio_dev;
+ 	struct device *dev = &pdev->dev;
++	irqreturn_t (*handler)(int irq, void *p) = NULL;
+ 	struct stm32_adc *adc;
+ 	int ret;
+ 
+@@ -1911,9 +1934,11 @@ static int stm32_adc_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		return ret;
+ 
++	if (!adc->dma_chan)
++		handler = &stm32_adc_trigger_handler;
++
+ 	ret = iio_triggered_buffer_setup(indio_dev,
+-					 &iio_pollfunc_store_time,
+-					 &stm32_adc_trigger_handler,
++					 &iio_pollfunc_store_time, handler,
+ 					 &stm32_adc_buffer_setup_ops);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "buffer setup failed\n");
+diff --git a/drivers/iio/adc/ti-ads8344.c b/drivers/iio/adc/ti-ads8344.c
+index 9a460807d46d..abe4b56c847c 100644
+--- a/drivers/iio/adc/ti-ads8344.c
++++ b/drivers/iio/adc/ti-ads8344.c
+@@ -29,7 +29,7 @@ struct ads8344 {
+ 	struct mutex lock;
+ 
+ 	u8 tx_buf ____cacheline_aligned;
+-	u16 rx_buf;
++	u8 rx_buf[3];
+ };
+ 
+ #define ADS8344_VOLTAGE_CHANNEL(chan, si)				\
+@@ -89,11 +89,11 @@ static int ads8344_adc_conversion(struct ads8344 *adc, int channel,
+ 
+ 	udelay(9);
+ 
+-	ret = spi_read(spi, &adc->rx_buf, 2);
++	ret = spi_read(spi, adc->rx_buf, sizeof(adc->rx_buf));
+ 	if (ret)
+ 		return ret;
+ 
+-	return adc->rx_buf;
++	return adc->rx_buf[0] << 9 | adc->rx_buf[1] << 1 | adc->rx_buf[2] >> 7;
+ }
+ 
+ static int ads8344_read_raw(struct iio_dev *iio,
+diff --git a/drivers/iio/adc/xilinx-xadc-core.c b/drivers/iio/adc/xilinx-xadc-core.c
+index ec227b358cd6..6fd06e4eff73 100644
+--- a/drivers/iio/adc/xilinx-xadc-core.c
++++ b/drivers/iio/adc/xilinx-xadc-core.c
+@@ -102,6 +102,16 @@ static const unsigned int XADC_ZYNQ_UNMASK_TIMEOUT = 500;
+ 
+ #define XADC_FLAGS_BUFFERED BIT(0)
+ 
++/*
++ * The XADC hardware supports a samplerate of up to 1MSPS. Unfortunately it
++ * does not have a hardware FIFO, which means an interrupt is generated for
++ * each conversion sequence. At a 1MSPS sample rate the CPU in the ZYNQ7000 is
++ * so overloaded by the interrupts that it soft-locks up. For this reason the
++ * driver limits the maximum samplerate to 150kSPS. At this rate the CPU is
++ * fairly busy, but still responsive.
++ */
++#define XADC_MAX_SAMPLERATE 150000
++
+ static void xadc_write_reg(struct xadc *xadc, unsigned int reg,
+ 	uint32_t val)
+ {
+@@ -674,7 +684,7 @@ static int xadc_trigger_set_state(struct iio_trigger *trigger, bool state)
+ 
+ 	spin_lock_irqsave(&xadc->lock, flags);
+ 	xadc_read_reg(xadc, XADC_AXI_REG_IPIER, &val);
+-	xadc_write_reg(xadc, XADC_AXI_REG_IPISR, val & XADC_AXI_INT_EOS);
++	xadc_write_reg(xadc, XADC_AXI_REG_IPISR, XADC_AXI_INT_EOS);
+ 	if (state)
+ 		val |= XADC_AXI_INT_EOS;
+ 	else
+@@ -722,13 +732,14 @@ static int xadc_power_adc_b(struct xadc *xadc, unsigned int seq_mode)
+ {
+ 	uint16_t val;
+ 
++	/* Powerdown the ADC-B when it is not needed. */
+ 	switch (seq_mode) {
+ 	case XADC_CONF1_SEQ_SIMULTANEOUS:
+ 	case XADC_CONF1_SEQ_INDEPENDENT:
+-		val = XADC_CONF2_PD_ADC_B;
++		val = 0;
+ 		break;
+ 	default:
+-		val = 0;
++		val = XADC_CONF2_PD_ADC_B;
+ 		break;
+ 	}
+ 
+@@ -797,6 +808,16 @@ static int xadc_preenable(struct iio_dev *indio_dev)
+ 	if (ret)
+ 		goto err;
+ 
++	/*
++	 * In simultaneous mode the upper and lower aux channels are sampled at
++	 * the same time. In this mode the upper 8 bits in the sequencer
++	 * register are don't care and the lower 8 bits control two channels
++	 * each. As such we must set the bit if either the channel in the lower
++	 * group or the upper group is enabled.
++	 */
++	if (seq_mode == XADC_CONF1_SEQ_SIMULTANEOUS)
++		scan_mask = ((scan_mask >> 8) | scan_mask) & 0xff0000;
++
+ 	ret = xadc_write_adc_reg(xadc, XADC_REG_SEQ(1), scan_mask >> 16);
+ 	if (ret)
+ 		goto err;
+@@ -823,11 +844,27 @@ static const struct iio_buffer_setup_ops xadc_buffer_ops = {
+ 	.postdisable = &xadc_postdisable,
+ };
+ 
++static int xadc_read_samplerate(struct xadc *xadc)
++{
++	unsigned int div;
++	uint16_t val16;
++	int ret;
++
++	ret = xadc_read_adc_reg(xadc, XADC_REG_CONF2, &val16);
++	if (ret)
++		return ret;
++
++	div = (val16 & XADC_CONF2_DIV_MASK) >> XADC_CONF2_DIV_OFFSET;
++	if (div < 2)
++		div = 2;
++
++	return xadc_get_dclk_rate(xadc) / div / 26;
++}
++
+ static int xadc_read_raw(struct iio_dev *indio_dev,
+ 	struct iio_chan_spec const *chan, int *val, int *val2, long info)
+ {
+ 	struct xadc *xadc = iio_priv(indio_dev);
+-	unsigned int div;
+ 	uint16_t val16;
+ 	int ret;
+ 
+@@ -880,41 +917,31 @@ static int xadc_read_raw(struct iio_dev *indio_dev,
+ 		*val = -((273150 << 12) / 503975);
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_SAMP_FREQ:
+-		ret = xadc_read_adc_reg(xadc, XADC_REG_CONF2, &val16);
+-		if (ret)
++		ret = xadc_read_samplerate(xadc);
++		if (ret < 0)
+ 			return ret;
+ 
+-		div = (val16 & XADC_CONF2_DIV_MASK) >> XADC_CONF2_DIV_OFFSET;
+-		if (div < 2)
+-			div = 2;
+-
+-		*val = xadc_get_dclk_rate(xadc) / div / 26;
+-
++		*val = ret;
+ 		return IIO_VAL_INT;
+ 	default:
+ 		return -EINVAL;
+ 	}
+ }
+ 
+-static int xadc_write_raw(struct iio_dev *indio_dev,
+-	struct iio_chan_spec const *chan, int val, int val2, long info)
++static int xadc_write_samplerate(struct xadc *xadc, int val)
+ {
+-	struct xadc *xadc = iio_priv(indio_dev);
+ 	unsigned long clk_rate = xadc_get_dclk_rate(xadc);
+ 	unsigned int div;
+ 
+ 	if (!clk_rate)
+ 		return -EINVAL;
+ 
+-	if (info != IIO_CHAN_INFO_SAMP_FREQ)
+-		return -EINVAL;
+-
+ 	if (val <= 0)
+ 		return -EINVAL;
+ 
+ 	/* Max. 150 kSPS */
+-	if (val > 150000)
+-		val = 150000;
++	if (val > XADC_MAX_SAMPLERATE)
++		val = XADC_MAX_SAMPLERATE;
+ 
+ 	val *= 26;
+ 
+@@ -927,7 +954,7 @@ static int xadc_write_raw(struct iio_dev *indio_dev,
+ 	 * limit.
+ 	 */
+ 	div = clk_rate / val;
+-	if (clk_rate / div / 26 > 150000)
++	if (clk_rate / div / 26 > XADC_MAX_SAMPLERATE)
+ 		div++;
+ 	if (div < 2)
+ 		div = 2;
+@@ -938,6 +965,17 @@ static int xadc_write_raw(struct iio_dev *indio_dev,
+ 		div << XADC_CONF2_DIV_OFFSET);
+ }
+ 
++static int xadc_write_raw(struct iio_dev *indio_dev,
++	struct iio_chan_spec const *chan, int val, int val2, long info)
++{
++	struct xadc *xadc = iio_priv(indio_dev);
++
++	if (info != IIO_CHAN_INFO_SAMP_FREQ)
++		return -EINVAL;
++
++	return xadc_write_samplerate(xadc, val);
++}
++
+ static const struct iio_event_spec xadc_temp_events[] = {
+ 	{
+ 		.type = IIO_EV_TYPE_THRESH,
+@@ -1223,6 +1261,21 @@ static int xadc_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto err_free_samplerate_trigger;
+ 
++	/*
++	 * Make sure not to exceed the maximum samplerate since otherwise the
++	 * resulting interrupt storm will soft-lock the system.
++	 */
++	if (xadc->ops->flags & XADC_FLAGS_BUFFERED) {
++		ret = xadc_read_samplerate(xadc);
++		if (ret < 0)
++			goto err_free_samplerate_trigger;
++		if (ret > XADC_MAX_SAMPLERATE) {
++			ret = xadc_write_samplerate(xadc, XADC_MAX_SAMPLERATE);
++			if (ret < 0)
++				goto err_free_samplerate_trigger;
++		}
++	}
++
+ 	ret = request_irq(xadc->irq, xadc->ops->interrupt_handler, 0,
+ 			dev_name(&pdev->dev), indio_dev);
+ 	if (ret)
+diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
+index 0e35ff06f9af..13bdfbbf5f71 100644
+--- a/drivers/iio/common/st_sensors/st_sensors_core.c
++++ b/drivers/iio/common/st_sensors/st_sensors_core.c
+@@ -79,7 +79,7 @@ int st_sensors_set_odr(struct iio_dev *indio_dev, unsigned int odr)
+ 	struct st_sensor_odr_avl odr_out = {0, 0};
+ 	struct st_sensor_data *sdata = iio_priv(indio_dev);
+ 
+-	if (!sdata->sensor_settings->odr.addr)
++	if (!sdata->sensor_settings->odr.mask)
+ 		return 0;
+ 
+ 	err = st_sensors_match_odr(sdata->sensor_settings, odr, &odr_out);
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+index 84d219ae6aee..4426524b59f2 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_core.c
+@@ -2036,11 +2036,21 @@ static int st_lsm6dsx_init_hw_timer(struct st_lsm6dsx_hw *hw)
+ 	return 0;
+ }
+ 
+-static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw)
++static int st_lsm6dsx_reset_device(struct st_lsm6dsx_hw *hw)
+ {
+ 	const struct st_lsm6dsx_reg *reg;
+ 	int err;
+ 
++	/*
++	 * flush hw FIFO before device reset in order to avoid
++	 * possible races on interrupt line 1. If the first interrupt
++	 * line is asserted during hw reset the device will work in
++	 * I3C-only mode (if it is supported)
++	 */
++	err = st_lsm6dsx_flush_fifo(hw);
++	if (err < 0 && err != -ENOTSUPP)
++		return err;
++
+ 	/* device sw reset */
+ 	reg = &hw->settings->reset;
+ 	err = regmap_update_bits(hw->regmap, reg->addr, reg->mask,
+@@ -2059,6 +2069,18 @@ static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw)
+ 
+ 	msleep(50);
+ 
++	return 0;
++}
++
++static int st_lsm6dsx_init_device(struct st_lsm6dsx_hw *hw)
++{
++	const struct st_lsm6dsx_reg *reg;
++	int err;
++
++	err = st_lsm6dsx_reset_device(hw);
++	if (err < 0)
++		return err;
++
+ 	/* enable Block Data Update */
+ 	reg = &hw->settings->bdu;
+ 	err = regmap_update_bits(hw->regmap, reg->addr, reg->mask,
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 90ee4484a80a..2eb7b2968e5d 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -212,11 +212,12 @@ static int mei_me_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 	}
+ 	hw = to_me_hw(dev);
+ 	hw->mem_addr = pcim_iomap_table(pdev)[0];
+-	hw->irq = pdev->irq;
+ 	hw->read_fws = mei_me_read_fws;
+ 
+ 	pci_enable_msi(pdev);
+ 
++	hw->irq = pdev->irq;
++
+ 	 /* request and enable interrupt */
+ 	irqflags = pci_dev_msi_enabled(pdev) ? IRQF_ONESHOT : IRQF_SHARED;
+ 
+diff --git a/drivers/net/dsa/b53/b53_common.c b/drivers/net/dsa/b53/b53_common.c
+index 1a69286daa8d..d93de7096ae0 100644
+--- a/drivers/net/dsa/b53/b53_common.c
++++ b/drivers/net/dsa/b53/b53_common.c
+@@ -1454,6 +1454,10 @@ static int b53_arl_rw_op(struct b53_device *dev, unsigned int op)
+ 		reg |= ARLTBL_RW;
+ 	else
+ 		reg &= ~ARLTBL_RW;
++	if (dev->vlan_enabled)
++		reg &= ~ARLTBL_IVL_SVL_SELECT;
++	else
++		reg |= ARLTBL_IVL_SVL_SELECT;
+ 	b53_write8(dev, B53_ARLIO_PAGE, B53_ARLTBL_RW_CTRL, reg);
+ 
+ 	return b53_arl_op_wait(dev);
+@@ -1463,6 +1467,7 @@ static int b53_arl_read(struct b53_device *dev, u64 mac,
+ 			u16 vid, struct b53_arl_entry *ent, u8 *idx,
+ 			bool is_valid)
+ {
++	DECLARE_BITMAP(free_bins, B53_ARLTBL_MAX_BIN_ENTRIES);
+ 	unsigned int i;
+ 	int ret;
+ 
+@@ -1470,6 +1475,8 @@ static int b53_arl_read(struct b53_device *dev, u64 mac,
+ 	if (ret)
+ 		return ret;
+ 
++	bitmap_zero(free_bins, dev->num_arl_entries);
++
+ 	/* Read the bins */
+ 	for (i = 0; i < dev->num_arl_entries; i++) {
+ 		u64 mac_vid;
+@@ -1481,13 +1488,24 @@ static int b53_arl_read(struct b53_device *dev, u64 mac,
+ 			   B53_ARLTBL_DATA_ENTRY(i), &fwd_entry);
+ 		b53_arl_to_entry(ent, mac_vid, fwd_entry);
+ 
+-		if (!(fwd_entry & ARLTBL_VALID))
++		if (!(fwd_entry & ARLTBL_VALID)) {
++			set_bit(i, free_bins);
+ 			continue;
++		}
+ 		if ((mac_vid & ARLTBL_MAC_MASK) != mac)
+ 			continue;
++		if (dev->vlan_enabled &&
++		    ((mac_vid >> ARLTBL_VID_S) & ARLTBL_VID_MASK) != vid)
++			continue;
+ 		*idx = i;
++		return 0;
+ 	}
+ 
++	if (bitmap_weight(free_bins, dev->num_arl_entries) == 0)
++		return -ENOSPC;
++
++	*idx = find_first_bit(free_bins, dev->num_arl_entries);
++
+ 	return -ENOENT;
+ }
+ 
+@@ -1517,10 +1535,21 @@ static int b53_arl_op(struct b53_device *dev, int op, int port,
+ 	if (op)
+ 		return ret;
+ 
+-	/* We could not find a matching MAC, so reset to a new entry */
+-	if (ret) {
++	switch (ret) {
++	case -ENOSPC:
++		dev_dbg(dev->dev, "{%pM,%.4d} no space left in ARL\n",
++			addr, vid);
++		return is_valid ? ret : 0;
++	case -ENOENT:
++		/* We could not find a matching MAC, so reset to a new entry */
++		dev_dbg(dev->dev, "{%pM,%.4d} not found, using idx: %d\n",
++			addr, vid, idx);
+ 		fwd_entry = 0;
+-		idx = 1;
++		break;
++	default:
++		dev_dbg(dev->dev, "{%pM,%.4d} found, using idx: %d\n",
++			addr, vid, idx);
++		break;
+ 	}
+ 
+ 	/* For multicast address, the port is a bitmask and the validity
+@@ -1538,7 +1567,6 @@ static int b53_arl_op(struct b53_device *dev, int op, int port,
+ 		ent.is_valid = !!(ent.port);
+ 	}
+ 
+-	ent.is_valid = is_valid;
+ 	ent.vid = vid;
+ 	ent.is_static = true;
+ 	ent.is_age = false;
+diff --git a/drivers/net/dsa/b53/b53_regs.h b/drivers/net/dsa/b53/b53_regs.h
+index 2a9f421680aa..c90985c294a2 100644
+--- a/drivers/net/dsa/b53/b53_regs.h
++++ b/drivers/net/dsa/b53/b53_regs.h
+@@ -292,6 +292,7 @@
+ /* ARL Table Read/Write Register (8 bit) */
+ #define B53_ARLTBL_RW_CTRL		0x00
+ #define    ARLTBL_RW			BIT(0)
++#define    ARLTBL_IVL_SVL_SELECT	BIT(6)
+ #define    ARLTBL_START_DONE		BIT(7)
+ 
+ /* MAC Address Index Register (48 bit) */
+@@ -304,7 +305,7 @@
+  *
+  * BCM5325 and BCM5365 share most definitions below
+  */
+-#define B53_ARLTBL_MAC_VID_ENTRY(n)	(0x10 * (n))
++#define B53_ARLTBL_MAC_VID_ENTRY(n)	((0x10 * (n)) + 0x10)
+ #define   ARLTBL_MAC_MASK		0xffffffffffffULL
+ #define   ARLTBL_VID_S			48
+ #define   ARLTBL_VID_MASK_25		0xff
+@@ -316,13 +317,16 @@
+ #define   ARLTBL_VALID_25		BIT(63)
+ 
+ /* ARL Table Data Entry N Registers (32 bit) */
+-#define B53_ARLTBL_DATA_ENTRY(n)	((0x10 * (n)) + 0x08)
++#define B53_ARLTBL_DATA_ENTRY(n)	((0x10 * (n)) + 0x18)
+ #define   ARLTBL_DATA_PORT_ID_MASK	0x1ff
+ #define   ARLTBL_TC(tc)			((3 & tc) << 11)
+ #define   ARLTBL_AGE			BIT(14)
+ #define   ARLTBL_STATIC			BIT(15)
+ #define   ARLTBL_VALID			BIT(16)
+ 
++/* Maximum number of bin entries in the ARL for all switches */
++#define B53_ARLTBL_MAX_BIN_ENTRIES	4
++
+ /* ARL Search Control Register (8 bit) */
+ #define B53_ARL_SRCH_CTL		0x50
+ #define B53_ARL_SRCH_CTL_25		0x20
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index 1d678bee2cc9..b7c0c20e1325 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -938,6 +938,8 @@ static void bcmgenet_get_ethtool_stats(struct net_device *dev,
+ 	if (netif_running(dev))
+ 		bcmgenet_update_mib_counters(priv);
+ 
++	dev->netdev_ops->ndo_get_stats(dev);
++
+ 	for (i = 0; i < BCMGENET_STATS_LEN; i++) {
+ 		const struct bcmgenet_stats *s;
+ 		char *p;
+@@ -3142,6 +3144,7 @@ static struct net_device_stats *bcmgenet_get_stats(struct net_device *dev)
+ 	dev->stats.rx_packets = rx_packets;
+ 	dev->stats.rx_errors = rx_errors;
+ 	dev->stats.rx_missed_errors = rx_errors;
++	dev->stats.rx_dropped = rx_dropped;
+ 	return &dev->stats;
+ }
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+index 19c11568113a..7b9cd69f9844 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cudbg_lib.c
+@@ -1049,9 +1049,9 @@ static void cudbg_t4_fwcache(struct cudbg_init *pdbg_init,
+ 	}
+ }
+ 
+-static unsigned long cudbg_mem_region_size(struct cudbg_init *pdbg_init,
+-					   struct cudbg_error *cudbg_err,
+-					   u8 mem_type)
++static int cudbg_mem_region_size(struct cudbg_init *pdbg_init,
++				 struct cudbg_error *cudbg_err,
++				 u8 mem_type, unsigned long *region_size)
+ {
+ 	struct adapter *padap = pdbg_init->adap;
+ 	struct cudbg_meminfo mem_info;
+@@ -1060,15 +1060,23 @@ static unsigned long cudbg_mem_region_size(struct cudbg_init *pdbg_init,
+ 
+ 	memset(&mem_info, 0, sizeof(struct cudbg_meminfo));
+ 	rc = cudbg_fill_meminfo(padap, &mem_info);
+-	if (rc)
++	if (rc) {
++		cudbg_err->sys_err = rc;
+ 		return rc;
++	}
+ 
+ 	cudbg_t4_fwcache(pdbg_init, cudbg_err);
+ 	rc = cudbg_meminfo_get_mem_index(padap, &mem_info, mem_type, &mc_idx);
+-	if (rc)
++	if (rc) {
++		cudbg_err->sys_err = rc;
+ 		return rc;
++	}
++
++	if (region_size)
++		*region_size = mem_info.avail[mc_idx].limit -
++			       mem_info.avail[mc_idx].base;
+ 
+-	return mem_info.avail[mc_idx].limit - mem_info.avail[mc_idx].base;
++	return 0;
+ }
+ 
+ static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init,
+@@ -1076,7 +1084,12 @@ static int cudbg_collect_mem_region(struct cudbg_init *pdbg_init,
+ 				    struct cudbg_error *cudbg_err,
+ 				    u8 mem_type)
+ {
+-	unsigned long size = cudbg_mem_region_size(pdbg_init, cudbg_err, mem_type);
++	unsigned long size = 0;
++	int rc;
++
++	rc = cudbg_mem_region_size(pdbg_init, cudbg_err, mem_type, &size);
++	if (rc)
++		return rc;
+ 
+ 	return cudbg_read_fw_mem(pdbg_init, dbg_buff, mem_type, size,
+ 				 cudbg_err);
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
+index af1f40cbccc8..f5bc996ac77d 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_ptp.c
+@@ -311,32 +311,17 @@ static int cxgb4_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
+  */
+ static int cxgb4_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
+ {
+-	struct adapter *adapter = (struct adapter *)container_of(ptp,
+-				   struct adapter, ptp_clock_info);
+-	struct fw_ptp_cmd c;
++	struct adapter *adapter = container_of(ptp, struct adapter,
++					       ptp_clock_info);
+ 	u64 ns;
+-	int err;
+-
+-	memset(&c, 0, sizeof(c));
+-	c.op_to_portid = cpu_to_be32(FW_CMD_OP_V(FW_PTP_CMD) |
+-				     FW_CMD_REQUEST_F |
+-				     FW_CMD_READ_F |
+-				     FW_PTP_CMD_PORTID_V(0));
+-	c.retval_len16 = cpu_to_be32(FW_CMD_LEN16_V(sizeof(c) / 16));
+-	c.u.ts.sc = FW_PTP_SC_GET_TIME;
+ 
+-	err = t4_wr_mbox(adapter, adapter->mbox, &c, sizeof(c), &c);
+-	if (err < 0) {
+-		dev_err(adapter->pdev_dev,
+-			"PTP: %s error %d\n", __func__, -err);
+-		return err;
+-	}
++	ns = t4_read_reg(adapter, T5_PORT_REG(0, MAC_PORT_PTP_SUM_LO_A));
++	ns |= (u64)t4_read_reg(adapter,
++			       T5_PORT_REG(0, MAC_PORT_PTP_SUM_HI_A)) << 32;
+ 
+ 	/* convert to timespec*/
+-	ns = be64_to_cpu(c.u.ts.tm);
+ 	*ts = ns_to_timespec64(ns);
+-
+-	return err;
++	return 0;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+index a957a6e4d4c4..b0519c326692 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_regs.h
+@@ -1900,6 +1900,9 @@
+ 
+ #define MAC_PORT_CFG2_A 0x818
+ 
++#define MAC_PORT_PTP_SUM_LO_A 0x990
++#define MAC_PORT_PTP_SUM_HI_A 0x994
++
+ #define MPS_CMN_CTL_A	0x9000
+ 
+ #define COUNTPAUSEMCRX_S    5
+diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+index 4d5ca302c067..a30edb436f4a 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+@@ -43,6 +43,7 @@
+ #include <linux/ip.h>
+ #include <linux/ipv6.h>
+ #include <linux/moduleparam.h>
++#include <linux/indirect_call_wrapper.h>
+ 
+ #include "mlx4_en.h"
+ 
+@@ -261,6 +262,10 @@ static void mlx4_en_stamp_wqe(struct mlx4_en_priv *priv,
+ 	}
+ }
+ 
++INDIRECT_CALLABLE_DECLARE(u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
++						   struct mlx4_en_tx_ring *ring,
++						   int index, u64 timestamp,
++						   int napi_mode));
+ 
+ u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
+ 			 struct mlx4_en_tx_ring *ring,
+@@ -329,6 +334,11 @@ u32 mlx4_en_free_tx_desc(struct mlx4_en_priv *priv,
+ 	return tx_info->nr_txbb;
+ }
+ 
++INDIRECT_CALLABLE_DECLARE(u32 mlx4_en_recycle_tx_desc(struct mlx4_en_priv *priv,
++						      struct mlx4_en_tx_ring *ring,
++						      int index, u64 timestamp,
++						      int napi_mode));
++
+ u32 mlx4_en_recycle_tx_desc(struct mlx4_en_priv *priv,
+ 			    struct mlx4_en_tx_ring *ring,
+ 			    int index, u64 timestamp,
+@@ -449,7 +459,9 @@ bool mlx4_en_process_tx_cq(struct net_device *dev,
+ 				timestamp = mlx4_en_get_cqe_ts(cqe);
+ 
+ 			/* free next descriptor */
+-			last_nr_txbb = ring->free_tx_desc(
++			last_nr_txbb = INDIRECT_CALL_2(ring->free_tx_desc,
++						       mlx4_en_free_tx_desc,
++						       mlx4_en_recycle_tx_desc,
+ 					priv, ring, ring_index,
+ 					timestamp, napi_budget);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
+index c51b2adfc1e1..2cbfa5cfefab 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_acl_flex_actions.c
+@@ -316,7 +316,7 @@ struct mlxsw_afa_block *mlxsw_afa_block_create(struct mlxsw_afa *mlxsw_afa)
+ 
+ 	block = kzalloc(sizeof(*block), GFP_KERNEL);
+ 	if (!block)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 	INIT_LIST_HEAD(&block->resource_list);
+ 	block->afa = mlxsw_afa;
+ 
+@@ -344,7 +344,7 @@ err_second_set_create:
+ 	mlxsw_afa_set_destroy(block->first_set);
+ err_first_set_create:
+ 	kfree(block);
+-	return NULL;
++	return ERR_PTR(-ENOMEM);
+ }
+ EXPORT_SYMBOL(mlxsw_afa_block_create);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c
+index 6c66a0f1b79e..ad69913f19c1 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum2_acl_tcam.c
+@@ -88,8 +88,8 @@ static int mlxsw_sp2_acl_tcam_init(struct mlxsw_sp *mlxsw_sp, void *priv,
+ 	 * to be written using PEFA register to all indexes for all regions.
+ 	 */
+ 	afa_block = mlxsw_afa_block_create(mlxsw_sp->afa);
+-	if (!afa_block) {
+-		err = -ENOMEM;
++	if (IS_ERR(afa_block)) {
++		err = PTR_ERR(afa_block);
+ 		goto err_afa_block;
+ 	}
+ 	err = mlxsw_afa_block_continue(afa_block);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+index 3d3cca596116..d77cdcb5c642 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl.c
+@@ -444,7 +444,7 @@ mlxsw_sp_acl_rulei_create(struct mlxsw_sp_acl *acl,
+ 
+ 	rulei = kzalloc(sizeof(*rulei), GFP_KERNEL);
+ 	if (!rulei)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	if (afa_block) {
+ 		rulei->act_block = afa_block;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c
+index 346f4a5fe053..221aa6a474eb 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_mr_tcam.c
+@@ -199,8 +199,8 @@ mlxsw_sp_mr_tcam_afa_block_create(struct mlxsw_sp *mlxsw_sp,
+ 	int err;
+ 
+ 	afa_block = mlxsw_afa_block_create(mlxsw_sp->afa);
+-	if (!afa_block)
+-		return ERR_PTR(-ENOMEM);
++	if (IS_ERR(afa_block))
++		return afa_block;
+ 
+ 	err = mlxsw_afa_block_append_allocated_counter(afa_block,
+ 						       counter_index);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+index 0e2fa14f1423..a3934ca6a043 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-meson8b.c
+@@ -119,6 +119,7 @@ static int meson8b_init_rgmii_tx_clk(struct meson8b_dwmac *dwmac)
+ 		{ .div = 5, .val = 5, },
+ 		{ .div = 6, .val = 6, },
+ 		{ .div = 7, .val = 7, },
++		{ /* end of array */ }
+ 	};
+ 
+ 	clk_configs = devm_kzalloc(dev, sizeof(*clk_configs), GFP_KERNEL);
+diff --git a/drivers/net/ethernet/xscale/ixp4xx_eth.c b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+index 269596c15133..2e5202923510 100644
+--- a/drivers/net/ethernet/xscale/ixp4xx_eth.c
++++ b/drivers/net/ethernet/xscale/ixp4xx_eth.c
+@@ -1387,6 +1387,8 @@ static int ixp4xx_eth_probe(struct platform_device *pdev)
+ 		return -ENODEV;
+ 	regs_phys = res->start;
+ 	port->regs = devm_ioremap_resource(dev, res);
++	if (IS_ERR(port->regs))
++		return PTR_ERR(port->regs);
+ 
+ 	switch (port->id) {
+ 	case IXP4XX_ETH_NPEA:
+diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
+index 09f279c0182b..6b461be1820b 100644
+--- a/drivers/net/geneve.c
++++ b/drivers/net/geneve.c
+@@ -1207,7 +1207,7 @@ static int geneve_validate(struct nlattr *tb[], struct nlattr *data[],
+ 		enum ifla_geneve_df df = nla_get_u8(data[IFLA_GENEVE_DF]);
+ 
+ 		if (df < 0 || df > GENEVE_DF_MAX) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_GENEVE_DF],
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_GENEVE_DF],
+ 					    "Invalid DF attribute");
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 9b4ae5c36da6..35aa7b0a2aeb 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -3658,11 +3658,11 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+ 			  struct netlink_ext_ack *extack)
+ {
+ 	struct macsec_dev *macsec = macsec_priv(dev);
++	rx_handler_func_t *rx_handler;
++	u8 icv_len = DEFAULT_ICV_LEN;
+ 	struct net_device *real_dev;
+-	int err;
++	int err, mtu;
+ 	sci_t sci;
+-	u8 icv_len = DEFAULT_ICV_LEN;
+-	rx_handler_func_t *rx_handler;
+ 
+ 	if (!tb[IFLA_LINK])
+ 		return -EINVAL;
+@@ -3681,7 +3681,11 @@ static int macsec_newlink(struct net *net, struct net_device *dev,
+ 
+ 	if (data && data[IFLA_MACSEC_ICV_LEN])
+ 		icv_len = nla_get_u8(data[IFLA_MACSEC_ICV_LEN]);
+-	dev->mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
++	mtu = real_dev->mtu - icv_len - macsec_extra_len(true);
++	if (mtu < 0)
++		dev->mtu = 0;
++	else
++		dev->mtu = mtu;
+ 
+ 	rx_handler = rtnl_dereference(real_dev->rx_handler);
+ 	if (rx_handler && rx_handler != macsec_handle_frame)
+diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
+index e7289d67268f..0482adc9916b 100644
+--- a/drivers/net/macvlan.c
++++ b/drivers/net/macvlan.c
+@@ -1704,7 +1704,7 @@ static int macvlan_device_event(struct notifier_block *unused,
+ 						struct macvlan_dev,
+ 						list);
+ 
+-		if (macvlan_sync_address(vlan->dev, dev->dev_addr))
++		if (vlan && macvlan_sync_address(vlan->dev, dev->dev_addr))
+ 			return NOTIFY_BAD;
+ 
+ 		break;
+diff --git a/drivers/net/team/team.c b/drivers/net/team/team.c
+index 4004f98e50d9..04845a4017f9 100644
+--- a/drivers/net/team/team.c
++++ b/drivers/net/team/team.c
+@@ -468,6 +468,9 @@ static const struct team_mode *team_mode_get(const char *kind)
+ 	struct team_mode_item *mitem;
+ 	const struct team_mode *mode = NULL;
+ 
++	if (!try_module_get(THIS_MODULE))
++		return NULL;
++
+ 	spin_lock(&mode_list_lock);
+ 	mitem = __find_mode(kind);
+ 	if (!mitem) {
+@@ -483,6 +486,7 @@ static const struct team_mode *team_mode_get(const char *kind)
+ 	}
+ 
+ 	spin_unlock(&mode_list_lock);
++	module_put(THIS_MODULE);
+ 	return mode;
+ }
+ 
+diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
+index b8228f50bc94..6716deeb35e3 100644
+--- a/drivers/net/vrf.c
++++ b/drivers/net/vrf.c
+@@ -188,8 +188,8 @@ static netdev_tx_t vrf_process_v6_outbound(struct sk_buff *skb,
+ 	fl6.flowi6_proto = iph->nexthdr;
+ 	fl6.flowi6_flags = FLOWI_FLAG_SKIP_NH_OIF;
+ 
+-	dst = ip6_route_output(net, NULL, &fl6);
+-	if (dst == dst_null)
++	dst = ip6_dst_lookup_flow(net, NULL, &fl6, NULL);
++	if (IS_ERR(dst) || dst == dst_null)
+ 		goto err;
+ 
+ 	skb_dst_drop(skb);
+@@ -474,7 +474,8 @@ static struct sk_buff *vrf_ip6_out(struct net_device *vrf_dev,
+ 	if (rt6_need_strict(&ipv6_hdr(skb)->daddr))
+ 		return skb;
+ 
+-	if (qdisc_tx_is_default(vrf_dev))
++	if (qdisc_tx_is_default(vrf_dev) ||
++	    IP6CB(skb)->flags & IP6SKB_XFRM_TRANSFORMED)
+ 		return vrf_ip6_out_direct(vrf_dev, sk, skb);
+ 
+ 	return vrf_ip6_out_redirect(vrf_dev, skb);
+@@ -686,7 +687,8 @@ static struct sk_buff *vrf_ip_out(struct net_device *vrf_dev,
+ 	    ipv4_is_lbcast(ip_hdr(skb)->daddr))
+ 		return skb;
+ 
+-	if (qdisc_tx_is_default(vrf_dev))
++	if (qdisc_tx_is_default(vrf_dev) ||
++	    IPCB(skb)->flags & IPSKB_XFRM_TRANSFORMED)
+ 		return vrf_ip_out_direct(vrf_dev, sk, skb);
+ 
+ 	return vrf_ip_out_redirect(vrf_dev, skb);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index 45308b3350cf..a5b415fed11e 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -3144,7 +3144,7 @@ static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[],
+ 		u32 id = nla_get_u32(data[IFLA_VXLAN_ID]);
+ 
+ 		if (id >= VXLAN_N_VID) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_ID],
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_ID],
+ 					    "VXLAN ID must be lower than 16777216");
+ 			return -ERANGE;
+ 		}
+@@ -3155,7 +3155,7 @@ static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[],
+ 			= nla_data(data[IFLA_VXLAN_PORT_RANGE]);
+ 
+ 		if (ntohs(p->high) < ntohs(p->low)) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_PORT_RANGE],
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_PORT_RANGE],
+ 					    "Invalid source port range");
+ 			return -EINVAL;
+ 		}
+@@ -3165,7 +3165,7 @@ static int vxlan_validate(struct nlattr *tb[], struct nlattr *data[],
+ 		enum ifla_vxlan_df df = nla_get_u8(data[IFLA_VXLAN_DF]);
+ 
+ 		if (df < 0 || df > VXLAN_DF_MAX) {
+-			NL_SET_ERR_MSG_ATTR(extack, tb[IFLA_VXLAN_DF],
++			NL_SET_ERR_MSG_ATTR(extack, data[IFLA_VXLAN_DF],
+ 					    "Invalid DF attribute");
+ 			return -EINVAL;
+ 		}
+diff --git a/drivers/net/wireless/intel/iwlegacy/3945-rs.c b/drivers/net/wireless/intel/iwlegacy/3945-rs.c
+index 6209f85a71dd..0af9e997c9f6 100644
+--- a/drivers/net/wireless/intel/iwlegacy/3945-rs.c
++++ b/drivers/net/wireless/intel/iwlegacy/3945-rs.c
+@@ -374,7 +374,7 @@ out:
+ }
+ 
+ static void *
+-il3945_rs_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
++il3945_rs_alloc(struct ieee80211_hw *hw)
+ {
+ 	return hw->priv;
+ }
+diff --git a/drivers/net/wireless/intel/iwlegacy/4965-rs.c b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+index 7c6e2c863497..0a02d8aca320 100644
+--- a/drivers/net/wireless/intel/iwlegacy/4965-rs.c
++++ b/drivers/net/wireless/intel/iwlegacy/4965-rs.c
+@@ -2474,7 +2474,7 @@ il4965_rs_fill_link_cmd(struct il_priv *il, struct il_lq_sta *lq_sta,
+ }
+ 
+ static void *
+-il4965_rs_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
++il4965_rs_alloc(struct ieee80211_hw *hw)
+ {
+ 	return hw->priv;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+index 226165db7dfd..dac809df7f1d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/dvm/rs.c
+@@ -3019,7 +3019,7 @@ static void rs_fill_link_cmd(struct iwl_priv *priv,
+ 			cpu_to_le16(priv->lib->bt_params->agg_time_limit);
+ }
+ 
+-static void *rs_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
++static void *rs_alloc(struct ieee80211_hw *hw)
+ {
+ 	return hw->priv;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+index ba2aff3af0fe..e3a33388be70 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
++++ b/drivers/net/wireless/intel/iwlwifi/fw/acpi.c
+@@ -296,9 +296,14 @@ int iwl_sar_select_profile(struct iwl_fw_runtime *fwrt,
+ 		if (!prof->enabled) {
+ 			IWL_DEBUG_RADIO(fwrt, "SAR profile %d is disabled.\n",
+ 					profs[i]);
+-			/* if one of the profiles is disabled, we fail all */
+-			return -ENOENT;
++			/*
++			 * if one of the profiles is disabled, we
++			 * ignore all of them and return 1 to
++			 * differentiate disabled from other failures.
++			 */
++			return 1;
+ 		}
++
+ 		IWL_DEBUG_INFO(fwrt,
+ 			       "SAR EWRD: chain %d profile index %d\n",
+ 			       i, profs[i]);
+diff --git a/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h b/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
+index 73196cbc7fbe..75d958bab0e3 100644
+--- a/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
++++ b/drivers/net/wireless/intel/iwlwifi/fw/api/txq.h
+@@ -8,7 +8,7 @@
+  * Copyright(c) 2007 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2019 Intel Corporation
++ * Copyright(c) 2019 - 2020 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -31,7 +31,7 @@
+  * Copyright(c) 2005 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2019 Intel Corporation
++ * Copyright(c) 2019 - 2020 Intel Corporation
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -99,7 +99,7 @@ enum iwl_mvm_dqa_txq {
+ 	IWL_MVM_DQA_MAX_MGMT_QUEUE = 8,
+ 	IWL_MVM_DQA_AP_PROBE_RESP_QUEUE = 9,
+ 	IWL_MVM_DQA_MIN_DATA_QUEUE = 10,
+-	IWL_MVM_DQA_MAX_DATA_QUEUE = 31,
++	IWL_MVM_DQA_MAX_DATA_QUEUE = 30,
+ };
+ 
+ enum iwl_mvm_tx_fifo {
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+index bab0999f002c..252c2ca1b0ed 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -532,8 +532,7 @@ static struct ieee80211_sband_iftype_data iwl_he_capa[] = {
+ 					IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US |
+ 					IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8,
+ 				.mac_cap_info[2] =
+-					IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP |
+-					IEEE80211_HE_MAC_CAP2_ACK_EN,
++					IEEE80211_HE_MAC_CAP2_32BIT_BA_BITMAP,
+ 				.mac_cap_info[3] =
+ 					IEEE80211_HE_MAC_CAP3_OMI_CONTROL |
+ 					IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_VHT_2,
+@@ -617,8 +616,7 @@ static struct ieee80211_sband_iftype_data iwl_he_capa[] = {
+ 					IEEE80211_HE_MAC_CAP1_TF_MAC_PAD_DUR_16US |
+ 					IEEE80211_HE_MAC_CAP1_MULTI_TID_AGG_RX_QOS_8,
+ 				.mac_cap_info[2] =
+-					IEEE80211_HE_MAC_CAP2_BSR |
+-					IEEE80211_HE_MAC_CAP2_ACK_EN,
++					IEEE80211_HE_MAC_CAP2_BSR,
+ 				.mac_cap_info[3] =
+ 					IEEE80211_HE_MAC_CAP3_OMI_CONTROL |
+ 					IEEE80211_HE_MAC_CAP3_MAX_AMPDU_LEN_EXP_VHT_2,
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+index 98263cd37944..a8ee79441848 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/fw.c
+@@ -698,6 +698,7 @@ int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b)
+ 		struct iwl_dev_tx_power_cmd_v4 v4;
+ 	} cmd;
+ 
++	int ret;
+ 	u16 len = 0;
+ 
+ 	cmd.v5.v3.set_mode = cpu_to_le32(IWL_TX_POWER_MODE_SET_CHAINS);
+@@ -712,9 +713,14 @@ int iwl_mvm_sar_select_profile(struct iwl_mvm *mvm, int prof_a, int prof_b)
+ 		len = sizeof(cmd.v4.v3);
+ 
+ 
+-	if (iwl_sar_select_profile(&mvm->fwrt, cmd.v5.v3.per_chain_restriction,
+-				   prof_a, prof_b))
+-		return -ENOENT;
++	ret = iwl_sar_select_profile(&mvm->fwrt,
++				     cmd.v5.v3.per_chain_restriction,
++				     prof_a, prof_b);
++
++	/* return on error or if the profile is disabled (positive number) */
++	if (ret)
++		return ret;
++
+ 	IWL_DEBUG_RADIO(mvm, "Sending REDUCE_TX_POWER_CMD per chain\n");
+ 	return iwl_mvm_send_cmd_pdu(mvm, REDUCE_TX_POWER_CMD, 0, len, &cmd);
+ }
+@@ -1005,16 +1011,7 @@ static int iwl_mvm_sar_init(struct iwl_mvm *mvm)
+ 				"EWRD SAR BIOS table invalid or unavailable. (%d)\n",
+ 				ret);
+ 
+-	ret = iwl_mvm_sar_select_profile(mvm, 1, 1);
+-	/*
+-	 * If we don't have profile 0 from BIOS, just skip it.  This
+-	 * means that SAR Geo will not be enabled either, even if we
+-	 * have other valid profiles.
+-	 */
+-	if (ret == -ENOENT)
+-		return 1;
+-
+-	return ret;
++	return iwl_mvm_sar_select_profile(mvm, 1, 1);
+ }
+ 
+ static int iwl_mvm_load_rt_fw(struct iwl_mvm *mvm)
+@@ -1236,7 +1233,7 @@ int iwl_mvm_up(struct iwl_mvm *mvm)
+ 	ret = iwl_mvm_sar_init(mvm);
+ 	if (ret == 0) {
+ 		ret = iwl_mvm_sar_geo_init(mvm);
+-	} else if (ret > 0 && !iwl_sar_get_wgds_table(&mvm->fwrt)) {
++	} else if (ret == -ENOENT && !iwl_sar_get_wgds_table(&mvm->fwrt)) {
+ 		/*
+ 		 * If basic SAR is not available, we check for WGDS,
+ 		 * which should *not* be available either.  If it is
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+index 1a990ed9c3ca..08bef33a1d7e 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c
+@@ -3665,7 +3665,7 @@ static void rs_fill_lq_cmd(struct iwl_mvm *mvm,
+ 			cpu_to_le16(iwl_mvm_coex_agg_time_limit(mvm, sta));
+ }
+ 
+-static void *rs_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
++static void *rs_alloc(struct ieee80211_hw *hw)
+ {
+ 	return hw->priv;
+ }
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+index 5ee33c8ae9d2..77b8def26edb 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rx.c
+@@ -8,7 +8,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2018 - 2020 Intel Corporation
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+@@ -31,7 +31,7 @@
+  * Copyright(c) 2012 - 2014 Intel Corporation. All rights reserved.
+  * Copyright(c) 2013 - 2015 Intel Mobile Communications GmbH
+  * Copyright(c) 2016 - 2017 Intel Deutschland GmbH
+- * Copyright(c) 2018 - 2019 Intel Corporation
++ * Copyright(c) 2018 - 2020 Intel Corporation
+  * All rights reserved.
+  *
+  * Redistribution and use in source and binary forms, with or without
+@@ -566,6 +566,7 @@ void iwl_mvm_rx_rx_mpdu(struct iwl_mvm *mvm, struct napi_struct *napi,
+ 
+ struct iwl_mvm_stat_data {
+ 	struct iwl_mvm *mvm;
++	__le32 flags;
+ 	__le32 mac_id;
+ 	u8 beacon_filter_average_energy;
+ 	void *general;
+@@ -606,6 +607,13 @@ static void iwl_mvm_stat_iterator(void *_data, u8 *mac,
+ 			-general->beacon_average_energy[vif_id];
+ 	}
+ 
++	/* make sure that beacon statistics don't go backwards with TCM
++	 * request to clear statistics
++	 */
++	if (le32_to_cpu(data->flags) & IWL_STATISTICS_REPLY_FLG_CLEAR)
++		mvmvif->beacon_stats.accu_num_beacons +=
++			mvmvif->beacon_stats.num_beacons;
++
+ 	if (mvmvif->id != id)
+ 		return;
+ 
+@@ -763,6 +771,7 @@ void iwl_mvm_handle_rx_statistics(struct iwl_mvm *mvm,
+ 
+ 		flags = stats->flag;
+ 	}
++	data.flags = flags;
+ 
+ 	iwl_mvm_rx_stats_check_trigger(mvm, pkt);
+ 
+diff --git a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+index 64ef3f3ba23b..56ae72debb96 100644
+--- a/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
++++ b/drivers/net/wireless/intel/iwlwifi/mvm/sta.c
+@@ -722,6 +722,11 @@ static int iwl_mvm_find_free_queue(struct iwl_mvm *mvm, u8 sta_id,
+ 
+ 	lockdep_assert_held(&mvm->mutex);
+ 
++	if (WARN(maxq >= mvm->trans->trans_cfg->base_params->num_of_queues,
++		 "max queue %d >= num_of_queues (%d)", maxq,
++		 mvm->trans->trans_cfg->base_params->num_of_queues))
++		maxq = mvm->trans->trans_cfg->base_params->num_of_queues - 1;
++
+ 	/* This should not be hit with new TX path */
+ 	if (WARN_ON(iwl_mvm_has_new_tx_api(mvm)))
+ 		return -ENOSPC;
+@@ -1164,9 +1169,9 @@ static int iwl_mvm_inactivity_check(struct iwl_mvm *mvm, u8 alloc_for_sta)
+ 						   inactive_tid_bitmap,
+ 						   &unshare_queues,
+ 						   &changetid_queues);
+-		if (ret >= 0 && free_queue < 0) {
++		if (ret && free_queue < 0) {
+ 			queue_owner = sta;
+-			free_queue = ret;
++			free_queue = i;
+ 		}
+ 		/* only unlock sta lock - we still need the queue info lock */
+ 		spin_unlock_bh(&mvmsta->lock);
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+index 01f248ba8fec..9d5b1e51b50d 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/ctxt-info-gen3.c
+@@ -129,6 +129,18 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 	int cmdq_size = max_t(u32, IWL_CMD_QUEUE_SIZE,
+ 			      trans->cfg->min_txq_size);
+ 
++	switch (trans_pcie->rx_buf_size) {
++	case IWL_AMSDU_DEF:
++		return -EINVAL;
++	case IWL_AMSDU_2K:
++		break;
++	case IWL_AMSDU_4K:
++	case IWL_AMSDU_8K:
++	case IWL_AMSDU_12K:
++		control_flags |= IWL_PRPH_SCRATCH_RB_SIZE_4K;
++		break;
++	}
++
+ 	/* Allocate prph scratch */
+ 	prph_scratch = dma_alloc_coherent(trans->dev, sizeof(*prph_scratch),
+ 					  &trans_pcie->prph_scratch_dma_addr,
+@@ -143,10 +155,8 @@ int iwl_pcie_ctxt_info_gen3_init(struct iwl_trans *trans,
+ 		cpu_to_le16((u16)iwl_read32(trans, CSR_HW_REV));
+ 	prph_sc_ctrl->version.size = cpu_to_le16(sizeof(*prph_scratch) / 4);
+ 
+-	control_flags = IWL_PRPH_SCRATCH_RB_SIZE_4K |
+-			IWL_PRPH_SCRATCH_MTR_MODE |
+-			(IWL_PRPH_MTR_FORMAT_256B &
+-			 IWL_PRPH_SCRATCH_MTR_FORMAT);
++	control_flags |= IWL_PRPH_SCRATCH_MTR_MODE;
++	control_flags |= IWL_PRPH_MTR_FORMAT_256B & IWL_PRPH_SCRATCH_MTR_FORMAT;
+ 
+ 	/* initialize RX default queue */
+ 	prph_sc_ctrl->rbd_cfg.free_rbd_addr =
+diff --git a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+index 86fc00167817..9664dbc70ef1 100644
+--- a/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
++++ b/drivers/net/wireless/intel/iwlwifi/pcie/tx-gen2.c
+@@ -1418,6 +1418,9 @@ void iwl_trans_pcie_dyn_txq_free(struct iwl_trans *trans, int queue)
+ 
+ 	iwl_pcie_gen2_txq_unmap(trans, queue);
+ 
++	iwl_pcie_gen2_txq_free_memory(trans, trans_pcie->txq[queue]);
++	trans_pcie->txq[queue] = NULL;
++
+ 	IWL_DEBUG_TX_QUEUES(trans, "Deactivate queue %d\n", queue);
+ }
+ 
+diff --git a/drivers/net/wireless/realtek/rtlwifi/rc.c b/drivers/net/wireless/realtek/rtlwifi/rc.c
+index 0c7d74902d33..4b5ea0ec9109 100644
+--- a/drivers/net/wireless/realtek/rtlwifi/rc.c
++++ b/drivers/net/wireless/realtek/rtlwifi/rc.c
+@@ -261,7 +261,7 @@ static void rtl_rate_update(void *ppriv,
+ {
+ }
+ 
+-static void *rtl_rate_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
++static void *rtl_rate_alloc(struct ieee80211_hw *hw)
+ {
+ 	struct rtl_priv *rtlpriv = rtl_priv(hw);
+ 	return rtlpriv;
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index a4d8c90ee7cc..652ca87dac94 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -6,6 +6,7 @@
+ 
+ #include <linux/blkdev.h>
+ #include <linux/blk-mq.h>
++#include <linux/compat.h>
+ #include <linux/delay.h>
+ #include <linux/errno.h>
+ #include <linux/hdreg.h>
+@@ -1248,6 +1249,18 @@ static void nvme_enable_aen(struct nvme_ctrl *ctrl)
+ 	queue_work(nvme_wq, &ctrl->async_event_work);
+ }
+ 
++/*
++ * Convert integer values from ioctl structures to user pointers, silently
++ * ignoring the upper bits in the compat case to match behaviour of 32-bit
++ * kernels.
++ */
++static void __user *nvme_to_user_ptr(uintptr_t ptrval)
++{
++	if (in_compat_syscall())
++		ptrval = (compat_uptr_t)ptrval;
++	return (void __user *)ptrval;
++}
++
+ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
+ {
+ 	struct nvme_user_io io;
+@@ -1271,7 +1284,7 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
+ 
+ 	length = (io.nblocks + 1) << ns->lba_shift;
+ 	meta_len = (io.nblocks + 1) * ns->ms;
+-	metadata = (void __user *)(uintptr_t)io.metadata;
++	metadata = nvme_to_user_ptr(io.metadata);
+ 
+ 	if (ns->ext) {
+ 		length += meta_len;
+@@ -1294,7 +1307,7 @@ static int nvme_submit_io(struct nvme_ns *ns, struct nvme_user_io __user *uio)
+ 	c.rw.appmask = cpu_to_le16(io.appmask);
+ 
+ 	return nvme_submit_user_cmd(ns->queue, &c,
+-			(void __user *)(uintptr_t)io.addr, length,
++			nvme_to_user_ptr(io.addr), length,
+ 			metadata, meta_len, lower_32_bits(io.slba), NULL, 0);
+ }
+ 
+@@ -1414,9 +1427,9 @@ static int nvme_user_cmd(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ 
+ 	effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
+ 	status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
+-			(void __user *)(uintptr_t)cmd.addr, cmd.data_len,
+-			(void __user *)(uintptr_t)cmd.metadata,
+-			cmd.metadata_len, 0, &result, timeout);
++			nvme_to_user_ptr(cmd.addr), cmd.data_len,
++			nvme_to_user_ptr(cmd.metadata), cmd.metadata_len,
++			0, &result, timeout);
+ 	nvme_passthru_end(ctrl, effects);
+ 
+ 	if (status >= 0) {
+@@ -1461,8 +1474,8 @@ static int nvme_user_cmd64(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
+ 
+ 	effects = nvme_passthru_start(ctrl, ns, cmd.opcode);
+ 	status = nvme_submit_user_cmd(ns ? ns->queue : ctrl->admin_q, &c,
+-			(void __user *)(uintptr_t)cmd.addr, cmd.data_len,
+-			(void __user *)(uintptr_t)cmd.metadata, cmd.metadata_len,
++			nvme_to_user_ptr(cmd.addr), cmd.data_len,
++			nvme_to_user_ptr(cmd.metadata), cmd.metadata_len,
+ 			0, &cmd.result, timeout);
+ 	nvme_passthru_end(ctrl, effects);
+ 
+diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
+index a11900cf3a36..906dc0faa48e 100644
+--- a/drivers/nvme/host/multipath.c
++++ b/drivers/nvme/host/multipath.c
+@@ -514,7 +514,7 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ 	if (!nr_nsids)
+ 		return 0;
+ 
+-	down_write(&ctrl->namespaces_rwsem);
++	down_read(&ctrl->namespaces_rwsem);
+ 	list_for_each_entry(ns, &ctrl->namespaces, list) {
+ 		unsigned nsid = le32_to_cpu(desc->nsids[n]);
+ 
+@@ -525,7 +525,7 @@ static int nvme_update_ana_state(struct nvme_ctrl *ctrl,
+ 		if (++n == nr_nsids)
+ 			break;
+ 	}
+-	up_write(&ctrl->namespaces_rwsem);
++	up_read(&ctrl->namespaces_rwsem);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
+index 49d4373b84eb..00e6aa59954d 100644
+--- a/drivers/nvme/host/tcp.c
++++ b/drivers/nvme/host/tcp.c
+@@ -164,16 +164,14 @@ static inline bool nvme_tcp_async_req(struct nvme_tcp_request *req)
+ static inline bool nvme_tcp_has_inline_data(struct nvme_tcp_request *req)
+ {
+ 	struct request *rq;
+-	unsigned int bytes;
+ 
+ 	if (unlikely(nvme_tcp_async_req(req)))
+ 		return false; /* async events don't have a request */
+ 
+ 	rq = blk_mq_rq_from_pdu(req);
+-	bytes = blk_rq_payload_bytes(rq);
+ 
+-	return rq_data_dir(rq) == WRITE && bytes &&
+-		bytes <= nvme_tcp_inline_data_size(req->queue);
++	return rq_data_dir(rq) == WRITE && req->data_len &&
++		req->data_len <= nvme_tcp_inline_data_size(req->queue);
+ }
+ 
+ static inline struct page *nvme_tcp_req_cur_page(struct nvme_tcp_request *req)
+@@ -2090,7 +2088,9 @@ static blk_status_t nvme_tcp_map_data(struct nvme_tcp_queue *queue,
+ 
+ 	c->common.flags |= NVME_CMD_SGL_METABUF;
+ 
+-	if (rq_data_dir(rq) == WRITE && req->data_len &&
++	if (!blk_rq_nr_phys_segments(rq))
++		nvme_tcp_set_sg_null(c);
++	else if (rq_data_dir(rq) == WRITE &&
+ 	    req->data_len <= nvme_tcp_inline_data_size(queue))
+ 		nvme_tcp_set_sg_inline(queue, c, req->data_len);
+ 	else
+@@ -2117,7 +2117,8 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
+ 	req->data_sent = 0;
+ 	req->pdu_len = 0;
+ 	req->pdu_sent = 0;
+-	req->data_len = blk_rq_payload_bytes(rq);
++	req->data_len = blk_rq_nr_phys_segments(rq) ?
++				blk_rq_payload_bytes(rq) : 0;
+ 	req->curr_bio = rq->bio;
+ 
+ 	if (rq_data_dir(rq) == WRITE &&
+diff --git a/drivers/pwm/pwm-bcm2835.c b/drivers/pwm/pwm-bcm2835.c
+index 91e24f01b54e..d78f86f8e462 100644
+--- a/drivers/pwm/pwm-bcm2835.c
++++ b/drivers/pwm/pwm-bcm2835.c
+@@ -166,6 +166,7 @@ static int bcm2835_pwm_probe(struct platform_device *pdev)
+ 
+ 	pc->chip.dev = &pdev->dev;
+ 	pc->chip.ops = &bcm2835_pwm_ops;
++	pc->chip.base = -1;
+ 	pc->chip.npwm = 2;
+ 	pc->chip.of_xlate = of_pwm_xlate_with_flags;
+ 	pc->chip.of_pwm_n_cells = 3;
+diff --git a/drivers/pwm/pwm-imx27.c b/drivers/pwm/pwm-imx27.c
+index 35a7ac42269c..7e5ed0152977 100644
+--- a/drivers/pwm/pwm-imx27.c
++++ b/drivers/pwm/pwm-imx27.c
+@@ -289,7 +289,7 @@ static int pwm_imx27_apply(struct pwm_chip *chip, struct pwm_device *pwm,
+ 
+ 	writel(cr, imx->mmio_base + MX3_PWMCR);
+ 
+-	if (!state->enabled && cstate.enabled)
++	if (!state->enabled)
+ 		pwm_imx27_clk_disable_unprepare(chip);
+ 
+ 	return 0;
+diff --git a/drivers/pwm/pwm-rcar.c b/drivers/pwm/pwm-rcar.c
+index 2685577b6dd4..7ab9eb6616d9 100644
+--- a/drivers/pwm/pwm-rcar.c
++++ b/drivers/pwm/pwm-rcar.c
+@@ -229,24 +229,28 @@ static int rcar_pwm_probe(struct platform_device *pdev)
+ 	rcar_pwm->chip.base = -1;
+ 	rcar_pwm->chip.npwm = 1;
+ 
++	pm_runtime_enable(&pdev->dev);
++
+ 	ret = pwmchip_add(&rcar_pwm->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to register PWM chip: %d\n", ret);
++		pm_runtime_disable(&pdev->dev);
+ 		return ret;
+ 	}
+ 
+-	pm_runtime_enable(&pdev->dev);
+-
+ 	return 0;
+ }
+ 
+ static int rcar_pwm_remove(struct platform_device *pdev)
+ {
+ 	struct rcar_pwm_chip *rcar_pwm = platform_get_drvdata(pdev);
++	int ret;
++
++	ret = pwmchip_remove(&rcar_pwm->chip);
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 
+-	return pwmchip_remove(&rcar_pwm->chip);
++	return ret;
+ }
+ 
+ static const struct of_device_id rcar_pwm_of_table[] = {
+diff --git a/drivers/pwm/pwm-renesas-tpu.c b/drivers/pwm/pwm-renesas-tpu.c
+index 4a855a21b782..8032acc84161 100644
+--- a/drivers/pwm/pwm-renesas-tpu.c
++++ b/drivers/pwm/pwm-renesas-tpu.c
+@@ -415,16 +415,17 @@ static int tpu_probe(struct platform_device *pdev)
+ 	tpu->chip.base = -1;
+ 	tpu->chip.npwm = TPU_CHANNEL_MAX;
+ 
++	pm_runtime_enable(&pdev->dev);
++
+ 	ret = pwmchip_add(&tpu->chip);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "failed to register PWM chip\n");
++		pm_runtime_disable(&pdev->dev);
+ 		return ret;
+ 	}
+ 
+ 	dev_info(&pdev->dev, "TPU PWM %d registered\n", tpu->pdev->id);
+ 
+-	pm_runtime_enable(&pdev->dev);
+-
+ 	return 0;
+ }
+ 
+@@ -434,12 +435,10 @@ static int tpu_remove(struct platform_device *pdev)
+ 	int ret;
+ 
+ 	ret = pwmchip_remove(&tpu->chip);
+-	if (ret)
+-		return ret;
+ 
+ 	pm_runtime_disable(&pdev->dev);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ #ifdef CONFIG_OF
+diff --git a/drivers/s390/cio/device.c b/drivers/s390/cio/device.c
+index 0c6245fc7706..983f9c9e08de 100644
+--- a/drivers/s390/cio/device.c
++++ b/drivers/s390/cio/device.c
+@@ -849,8 +849,10 @@ static void io_subchannel_register(struct ccw_device *cdev)
+ 	 * Now we know this subchannel will stay, we can throw
+ 	 * our delayed uevent.
+ 	 */
+-	dev_set_uevent_suppress(&sch->dev, 0);
+-	kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++	if (dev_get_uevent_suppress(&sch->dev)) {
++		dev_set_uevent_suppress(&sch->dev, 0);
++		kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++	}
+ 	/* make it known to the system */
+ 	ret = ccw_device_add(cdev);
+ 	if (ret) {
+@@ -1058,8 +1060,11 @@ static int io_subchannel_probe(struct subchannel *sch)
+ 		 * Throw the delayed uevent for the subchannel, register
+ 		 * the ccw_device and exit.
+ 		 */
+-		dev_set_uevent_suppress(&sch->dev, 0);
+-		kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++		if (dev_get_uevent_suppress(&sch->dev)) {
++			/* should always be the case for the console */
++			dev_set_uevent_suppress(&sch->dev, 0);
++			kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++		}
+ 		cdev = sch_get_cdev(sch);
+ 		rc = ccw_device_add(cdev);
+ 		if (rc) {
+diff --git a/drivers/s390/cio/vfio_ccw_drv.c b/drivers/s390/cio/vfio_ccw_drv.c
+index e401a3d0aa57..339a6bc0339b 100644
+--- a/drivers/s390/cio/vfio_ccw_drv.c
++++ b/drivers/s390/cio/vfio_ccw_drv.c
+@@ -167,6 +167,11 @@ static int vfio_ccw_sch_probe(struct subchannel *sch)
+ 	if (ret)
+ 		goto out_disable;
+ 
++	if (dev_get_uevent_suppress(&sch->dev)) {
++		dev_set_uevent_suppress(&sch->dev, 0);
++		kobject_uevent(&sch->dev.kobj, KOBJ_ADD);
++	}
++
+ 	VFIO_CCW_MSG_EVENT(4, "bound to subchannel %x.%x.%04x\n",
+ 			   sch->schid.cssid, sch->schid.ssid,
+ 			   sch->schid.sch_no);
+diff --git a/drivers/scsi/libfc/fc_rport.c b/drivers/scsi/libfc/fc_rport.c
+index da6e97d8dc3b..6bb8917b99a1 100644
+--- a/drivers/scsi/libfc/fc_rport.c
++++ b/drivers/scsi/libfc/fc_rport.c
+@@ -1208,9 +1208,15 @@ static void fc_rport_prli_resp(struct fc_seq *sp, struct fc_frame *fp,
+ 		rjt = fc_frame_payload_get(fp, sizeof(*rjt));
+ 		if (!rjt)
+ 			FC_RPORT_DBG(rdata, "PRLI bad response\n");
+-		else
++		else {
+ 			FC_RPORT_DBG(rdata, "PRLI ELS rejected, reason %x expl %x\n",
+ 				     rjt->er_reason, rjt->er_explan);
++			if (rjt->er_reason == ELS_RJT_UNAB &&
++			    rjt->er_explan == ELS_EXPL_PLOGI_REQD) {
++				fc_rport_enter_plogi(rdata);
++				goto out;
++			}
++		}
+ 		fc_rport_error_retry(rdata, FC_EX_ELS_RJT);
+ 	}
+ 
+diff --git a/drivers/scsi/lpfc/lpfc.h b/drivers/scsi/lpfc/lpfc.h
+index 3f2cb17c4574..828873d5b3e8 100644
+--- a/drivers/scsi/lpfc/lpfc.h
++++ b/drivers/scsi/lpfc/lpfc.h
+@@ -482,7 +482,7 @@ struct lpfc_vport {
+ 	struct dentry *debug_nvmestat;
+ 	struct dentry *debug_scsistat;
+ 	struct dentry *debug_nvmektime;
+-	struct dentry *debug_cpucheck;
++	struct dentry *debug_hdwqstat;
+ 	struct dentry *vport_debugfs_root;
+ 	struct lpfc_debugfs_trc *disc_trc;
+ 	atomic_t disc_trc_cnt;
+@@ -1176,12 +1176,11 @@ struct lpfc_hba {
+ 	uint16_t sfp_warning;
+ 
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	uint16_t cpucheck_on;
++	uint16_t hdwqstat_on;
+ #define LPFC_CHECK_OFF		0
+ #define LPFC_CHECK_NVME_IO	1
+-#define LPFC_CHECK_NVMET_RCV	2
+-#define LPFC_CHECK_NVMET_IO	4
+-#define LPFC_CHECK_SCSI_IO	8
++#define LPFC_CHECK_NVMET_IO	2
++#define LPFC_CHECK_SCSI_IO	4
+ 	uint16_t ktime_on;
+ 	uint64_t ktime_data_samples;
+ 	uint64_t ktime_status_samples;
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c
+index 819335b16c2e..1b8be1006cbe 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.c
++++ b/drivers/scsi/lpfc/lpfc_debugfs.c
+@@ -1603,42 +1603,50 @@ out:
+ }
+ 
+ /**
+- * lpfc_debugfs_cpucheck_data - Dump target node list to a buffer
++ * lpfc_debugfs_hdwqstat_data - Dump I/O stats to a buffer
+  * @vport: The vport to gather target node info from.
+  * @buf: The buffer to dump log into.
+  * @size: The maximum amount of data to process.
+  *
+  * Description:
+- * This routine dumps the NVME statistics associated with @vport
++ * This routine dumps the NVME + SCSI statistics associated with @vport
+  *
+  * Return Value:
+  * This routine returns the amount of bytes that were dumped into @buf and will
+  * not exceed @size.
+  **/
+ static int
+-lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
++lpfc_debugfs_hdwqstat_data(struct lpfc_vport *vport, char *buf, int size)
+ {
+ 	struct lpfc_hba   *phba = vport->phba;
+ 	struct lpfc_sli4_hdw_queue *qp;
+-	int i, j, max_cnt;
+-	int len = 0;
++	struct lpfc_hdwq_stat *c_stat;
++	int i, j, len;
+ 	uint32_t tot_xmt;
+ 	uint32_t tot_rcv;
+ 	uint32_t tot_cmpl;
++	char tmp[LPFC_MAX_SCSI_INFO_TMP_LEN] = {0};
+ 
+-	len += scnprintf(buf + len, PAGE_SIZE - len,
+-			"CPUcheck %s ",
+-			(phba->cpucheck_on & LPFC_CHECK_NVME_IO ?
+-				"Enabled" : "Disabled"));
+-	if (phba->nvmet_support) {
+-		len += scnprintf(buf + len, PAGE_SIZE - len,
+-				"%s\n",
+-				(phba->cpucheck_on & LPFC_CHECK_NVMET_RCV ?
+-					"Rcv Enabled\n" : "Rcv Disabled\n"));
+-	} else {
+-		len += scnprintf(buf + len, PAGE_SIZE - len, "\n");
+-	}
+-	max_cnt = size - LPFC_DEBUG_OUT_LINE_SZ;
++	scnprintf(tmp, sizeof(tmp), "HDWQ Stats:\n\n");
++	if (strlcat(buf, tmp, size) >= size)
++		goto buffer_done;
++
++	scnprintf(tmp, sizeof(tmp), "(NVME Accounting: %s) ",
++		  (phba->hdwqstat_on &
++		  (LPFC_CHECK_NVME_IO | LPFC_CHECK_NVMET_IO) ?
++		  "Enabled" : "Disabled"));
++	if (strlcat(buf, tmp, size) >= size)
++		goto buffer_done;
++
++	scnprintf(tmp, sizeof(tmp), "(SCSI Accounting: %s) ",
++		  (phba->hdwqstat_on & LPFC_CHECK_SCSI_IO ?
++		  "Enabled" : "Disabled"));
++	if (strlcat(buf, tmp, size) >= size)
++		goto buffer_done;
++
++	scnprintf(tmp, sizeof(tmp), "\n\n");
++	if (strlcat(buf, tmp, size) >= size)
++		goto buffer_done;
+ 
+ 	for (i = 0; i < phba->cfg_hdw_queue; i++) {
+ 		qp = &phba->sli4_hba.hdwq[i];
+@@ -1646,46 +1654,76 @@ lpfc_debugfs_cpucheck_data(struct lpfc_vport *vport, char *buf, int size)
+ 		tot_rcv = 0;
+ 		tot_xmt = 0;
+ 		tot_cmpl = 0;
+-		for (j = 0; j < LPFC_CHECK_CPU_CNT; j++) {
+-			tot_xmt += qp->cpucheck_xmt_io[j];
+-			tot_cmpl += qp->cpucheck_cmpl_io[j];
+-			if (phba->nvmet_support)
+-				tot_rcv += qp->cpucheck_rcv_io[j];
+-		}
+ 
+-		/* Only display Hardware Qs with something */
+-		if (!tot_xmt && !tot_cmpl && !tot_rcv)
+-			continue;
++		for_each_present_cpu(j) {
++			c_stat = per_cpu_ptr(phba->sli4_hba.c_stat, j);
++
++			/* Only display for this HDWQ */
++			if (i != c_stat->hdwq_no)
++				continue;
+ 
+-		len += scnprintf(buf + len, PAGE_SIZE - len,
+-				"HDWQ %03d: ", i);
+-		for (j = 0; j < LPFC_CHECK_CPU_CNT; j++) {
+ 			/* Only display non-zero counters */
+-			if (!qp->cpucheck_xmt_io[j] &&
+-			    !qp->cpucheck_cmpl_io[j] &&
+-			    !qp->cpucheck_rcv_io[j])
++			if (!c_stat->xmt_io && !c_stat->cmpl_io &&
++			    !c_stat->rcv_io)
+ 				continue;
++
++			if (!tot_xmt && !tot_cmpl && !tot_rcv) {
++				/* Print HDWQ string only the first time */
++				scnprintf(tmp, sizeof(tmp), "[HDWQ %d]:\t", i);
++				if (strlcat(buf, tmp, size) >= size)
++					goto buffer_done;
++			}
++
++			tot_xmt += c_stat->xmt_io;
++			tot_cmpl += c_stat->cmpl_io;
++			if (phba->nvmet_support)
++				tot_rcv += c_stat->rcv_io;
++
++			scnprintf(tmp, sizeof(tmp), "| [CPU %d]: ", j);
++			if (strlcat(buf, tmp, size) >= size)
++				goto buffer_done;
++
+ 			if (phba->nvmet_support) {
+-				len += scnprintf(buf + len, PAGE_SIZE - len,
+-						"CPU %03d: %x/%x/%x ", j,
+-						qp->cpucheck_rcv_io[j],
+-						qp->cpucheck_xmt_io[j],
+-						qp->cpucheck_cmpl_io[j]);
++				scnprintf(tmp, sizeof(tmp),
++					  "XMT 0x%x CMPL 0x%x RCV 0x%x |",
++					  c_stat->xmt_io, c_stat->cmpl_io,
++					  c_stat->rcv_io);
++				if (strlcat(buf, tmp, size) >= size)
++					goto buffer_done;
+ 			} else {
+-				len += scnprintf(buf + len, PAGE_SIZE - len,
+-						"CPU %03d: %x/%x ", j,
+-						qp->cpucheck_xmt_io[j],
+-						qp->cpucheck_cmpl_io[j]);
++				scnprintf(tmp, sizeof(tmp),
++					  "XMT 0x%x CMPL 0x%x |",
++					  c_stat->xmt_io, c_stat->cmpl_io);
++				if (strlcat(buf, tmp, size) >= size)
++					goto buffer_done;
+ 			}
+ 		}
+-		len += scnprintf(buf + len, PAGE_SIZE - len,
+-				"Total: %x\n", tot_xmt);
+-		if (len >= max_cnt) {
+-			len += scnprintf(buf + len, PAGE_SIZE - len,
+-					"Truncated ...\n");
+-			return len;
++
++		/* Check if nothing to display */
++		if (!tot_xmt && !tot_cmpl && !tot_rcv)
++			continue;
++
++		scnprintf(tmp, sizeof(tmp), "\t->\t[HDWQ Total: ");
++		if (strlcat(buf, tmp, size) >= size)
++			goto buffer_done;
++
++		if (phba->nvmet_support) {
++			scnprintf(tmp, sizeof(tmp),
++				  "XMT 0x%x CMPL 0x%x RCV 0x%x]\n\n",
++				  tot_xmt, tot_cmpl, tot_rcv);
++			if (strlcat(buf, tmp, size) >= size)
++				goto buffer_done;
++		} else {
++			scnprintf(tmp, sizeof(tmp),
++				  "XMT 0x%x CMPL 0x%x]\n\n",
++				  tot_xmt, tot_cmpl);
++			if (strlcat(buf, tmp, size) >= size)
++				goto buffer_done;
+ 		}
+ 	}
++
++buffer_done:
++	len = strnlen(buf, size);
+ 	return len;
+ }
+ 
+@@ -2921,7 +2959,7 @@ lpfc_debugfs_nvmeio_trc_write(struct file *file, const char __user *buf,
+ }
+ 
+ static int
+-lpfc_debugfs_cpucheck_open(struct inode *inode, struct file *file)
++lpfc_debugfs_hdwqstat_open(struct inode *inode, struct file *file)
+ {
+ 	struct lpfc_vport *vport = inode->i_private;
+ 	struct lpfc_debug *debug;
+@@ -2932,14 +2970,14 @@ lpfc_debugfs_cpucheck_open(struct inode *inode, struct file *file)
+ 		goto out;
+ 
+ 	 /* Round to page boundary */
+-	debug->buffer = kmalloc(LPFC_CPUCHECK_SIZE, GFP_KERNEL);
++	debug->buffer = kcalloc(1, LPFC_SCSISTAT_SIZE, GFP_KERNEL);
+ 	if (!debug->buffer) {
+ 		kfree(debug);
+ 		goto out;
+ 	}
+ 
+-	debug->len = lpfc_debugfs_cpucheck_data(vport, debug->buffer,
+-		LPFC_CPUCHECK_SIZE);
++	debug->len = lpfc_debugfs_hdwqstat_data(vport, debug->buffer,
++						LPFC_SCSISTAT_SIZE);
+ 
+ 	debug->i_private = inode->i_private;
+ 	file->private_data = debug;
+@@ -2950,16 +2988,16 @@ out:
+ }
+ 
+ static ssize_t
+-lpfc_debugfs_cpucheck_write(struct file *file, const char __user *buf,
++lpfc_debugfs_hdwqstat_write(struct file *file, const char __user *buf,
+ 			    size_t nbytes, loff_t *ppos)
+ {
+ 	struct lpfc_debug *debug = file->private_data;
+ 	struct lpfc_vport *vport = (struct lpfc_vport *)debug->i_private;
+ 	struct lpfc_hba   *phba = vport->phba;
+-	struct lpfc_sli4_hdw_queue *qp;
++	struct lpfc_hdwq_stat *c_stat;
+ 	char mybuf[64];
+ 	char *pbuf;
+-	int i, j;
++	int i;
+ 
+ 	if (nbytes > 64)
+ 		nbytes = 64;
+@@ -2972,41 +3010,39 @@ lpfc_debugfs_cpucheck_write(struct file *file, const char __user *buf,
+ 
+ 	if ((strncmp(pbuf, "on", sizeof("on") - 1) == 0)) {
+ 		if (phba->nvmet_support)
+-			phba->cpucheck_on |= LPFC_CHECK_NVMET_IO;
++			phba->hdwqstat_on |= LPFC_CHECK_NVMET_IO;
+ 		else
+-			phba->cpucheck_on |= (LPFC_CHECK_NVME_IO |
++			phba->hdwqstat_on |= (LPFC_CHECK_NVME_IO |
+ 				LPFC_CHECK_SCSI_IO);
+ 		return strlen(pbuf);
+ 	} else if ((strncmp(pbuf, "nvme_on", sizeof("nvme_on") - 1) == 0)) {
+ 		if (phba->nvmet_support)
+-			phba->cpucheck_on |= LPFC_CHECK_NVMET_IO;
++			phba->hdwqstat_on |= LPFC_CHECK_NVMET_IO;
+ 		else
+-			phba->cpucheck_on |= LPFC_CHECK_NVME_IO;
++			phba->hdwqstat_on |= LPFC_CHECK_NVME_IO;
+ 		return strlen(pbuf);
+ 	} else if ((strncmp(pbuf, "scsi_on", sizeof("scsi_on") - 1) == 0)) {
+-		phba->cpucheck_on |= LPFC_CHECK_SCSI_IO;
++		if (!phba->nvmet_support)
++			phba->hdwqstat_on |= LPFC_CHECK_SCSI_IO;
+ 		return strlen(pbuf);
+-	} else if ((strncmp(pbuf, "rcv",
+-		   sizeof("rcv") - 1) == 0)) {
+-		if (phba->nvmet_support)
+-			phba->cpucheck_on |= LPFC_CHECK_NVMET_RCV;
+-		else
+-			return -EINVAL;
++	} else if ((strncmp(pbuf, "nvme_off", sizeof("nvme_off") - 1) == 0)) {
++		phba->hdwqstat_on &= ~(LPFC_CHECK_NVME_IO |
++				       LPFC_CHECK_NVMET_IO);
++		return strlen(pbuf);
++	} else if ((strncmp(pbuf, "scsi_off", sizeof("scsi_off") - 1) == 0)) {
++		phba->hdwqstat_on &= ~LPFC_CHECK_SCSI_IO;
+ 		return strlen(pbuf);
+ 	} else if ((strncmp(pbuf, "off",
+ 		   sizeof("off") - 1) == 0)) {
+-		phba->cpucheck_on = LPFC_CHECK_OFF;
++		phba->hdwqstat_on = LPFC_CHECK_OFF;
+ 		return strlen(pbuf);
+ 	} else if ((strncmp(pbuf, "zero",
+ 		   sizeof("zero") - 1) == 0)) {
+-		for (i = 0; i < phba->cfg_hdw_queue; i++) {
+-			qp = &phba->sli4_hba.hdwq[i];
+-
+-			for (j = 0; j < LPFC_CHECK_CPU_CNT; j++) {
+-				qp->cpucheck_rcv_io[j] = 0;
+-				qp->cpucheck_xmt_io[j] = 0;
+-				qp->cpucheck_cmpl_io[j] = 0;
+-			}
++		for_each_present_cpu(i) {
++			c_stat = per_cpu_ptr(phba->sli4_hba.c_stat, i);
++			c_stat->xmt_io = 0;
++			c_stat->cmpl_io = 0;
++			c_stat->rcv_io = 0;
+ 		}
+ 		return strlen(pbuf);
+ 	}
+@@ -5451,13 +5487,13 @@ static const struct file_operations lpfc_debugfs_op_nvmeio_trc = {
+ 	.release =      lpfc_debugfs_release,
+ };
+ 
+-#undef lpfc_debugfs_op_cpucheck
+-static const struct file_operations lpfc_debugfs_op_cpucheck = {
++#undef lpfc_debugfs_op_hdwqstat
++static const struct file_operations lpfc_debugfs_op_hdwqstat = {
+ 	.owner =        THIS_MODULE,
+-	.open =         lpfc_debugfs_cpucheck_open,
++	.open =         lpfc_debugfs_hdwqstat_open,
+ 	.llseek =       lpfc_debugfs_lseek,
+ 	.read =         lpfc_debugfs_read,
+-	.write =	lpfc_debugfs_cpucheck_write,
++	.write =	lpfc_debugfs_hdwqstat_write,
+ 	.release =      lpfc_debugfs_release,
+ };
+ 
+@@ -6081,11 +6117,11 @@ nvmeio_off:
+ 				    vport->vport_debugfs_root,
+ 				    vport, &lpfc_debugfs_op_nvmektime);
+ 
+-	snprintf(name, sizeof(name), "cpucheck");
+-	vport->debug_cpucheck =
++	snprintf(name, sizeof(name), "hdwqstat");
++	vport->debug_hdwqstat =
+ 		debugfs_create_file(name, 0644,
+ 				    vport->vport_debugfs_root,
+-				    vport, &lpfc_debugfs_op_cpucheck);
++				    vport, &lpfc_debugfs_op_hdwqstat);
+ 
+ 	/*
+ 	 * The following section is for additional directories/files for the
+@@ -6219,8 +6255,8 @@ lpfc_debugfs_terminate(struct lpfc_vport *vport)
+ 	debugfs_remove(vport->debug_nvmektime); /* nvmektime */
+ 	vport->debug_nvmektime = NULL;
+ 
+-	debugfs_remove(vport->debug_cpucheck); /* cpucheck */
+-	vport->debug_cpucheck = NULL;
++	debugfs_remove(vport->debug_hdwqstat); /* hdwqstat */
++	vport->debug_hdwqstat = NULL;
+ 
+ 	if (vport->vport_debugfs_root) {
+ 		debugfs_remove(vport->vport_debugfs_root); /* vportX */
+diff --git a/drivers/scsi/lpfc/lpfc_debugfs.h b/drivers/scsi/lpfc/lpfc_debugfs.h
+index 20f2537af511..6643b9bfd4f3 100644
+--- a/drivers/scsi/lpfc/lpfc_debugfs.h
++++ b/drivers/scsi/lpfc/lpfc_debugfs.h
+@@ -47,7 +47,6 @@
+ /* nvmestat output buffer size */
+ #define LPFC_NVMESTAT_SIZE 8192
+ #define LPFC_NVMEKTIME_SIZE 8192
+-#define LPFC_CPUCHECK_SIZE 8192
+ #define LPFC_NVMEIO_TRC_SIZE 8192
+ 
+ /* scsistat output buffer size */
+diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c
+index 5a605773dd0a..48fde2b1ebba 100644
+--- a/drivers/scsi/lpfc/lpfc_init.c
++++ b/drivers/scsi/lpfc/lpfc_init.c
+@@ -6935,6 +6935,17 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 		rc = -ENOMEM;
+ 		goto out_free_hba_cpu_map;
+ 	}
++
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++	phba->sli4_hba.c_stat = alloc_percpu(struct lpfc_hdwq_stat);
++	if (!phba->sli4_hba.c_stat) {
++		lpfc_printf_log(phba, KERN_ERR, LOG_INIT,
++				"3332 Failed allocating per cpu hdwq stats\n");
++		rc = -ENOMEM;
++		goto out_free_hba_eq_info;
++	}
++#endif
++
+ 	/*
+ 	 * Enable sr-iov virtual functions if supported and configured
+ 	 * through the module parameter.
+@@ -6954,6 +6965,10 @@ lpfc_sli4_driver_resource_setup(struct lpfc_hba *phba)
+ 
+ 	return 0;
+ 
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++out_free_hba_eq_info:
++	free_percpu(phba->sli4_hba.eq_info);
++#endif
+ out_free_hba_cpu_map:
+ 	kfree(phba->sli4_hba.cpu_map);
+ out_free_hba_eq_hdl:
+@@ -6992,6 +7007,9 @@ lpfc_sli4_driver_resource_unset(struct lpfc_hba *phba)
+ 	struct lpfc_fcf_conn_entry *conn_entry, *next_conn_entry;
+ 
+ 	free_percpu(phba->sli4_hba.eq_info);
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++	free_percpu(phba->sli4_hba.c_stat);
++#endif
+ 
+ 	/* Free memory allocated for msi-x interrupt vector to CPU mapping */
+ 	kfree(phba->sli4_hba.cpu_map);
+@@ -10831,6 +10849,9 @@ lpfc_cpu_affinity_check(struct lpfc_hba *phba, int vectors)
+ #ifdef CONFIG_X86
+ 	struct cpuinfo_x86 *cpuinfo;
+ #endif
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++	struct lpfc_hdwq_stat *c_stat;
++#endif
+ 
+ 	max_phys_id = 0;
+ 	min_phys_id = LPFC_VECTOR_MAP_EMPTY;
+@@ -11082,10 +11103,17 @@ found_any:
+ 	idx = 0;
+ 	for_each_possible_cpu(cpu) {
+ 		cpup = &phba->sli4_hba.cpu_map[cpu];
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++		c_stat = per_cpu_ptr(phba->sli4_hba.c_stat, cpu);
++		c_stat->hdwq_no = cpup->hdwq;
++#endif
+ 		if (cpup->hdwq != LPFC_VECTOR_MAP_EMPTY)
+ 			continue;
+ 
+ 		cpup->hdwq = idx++ % phba->cfg_hdw_queue;
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++		c_stat->hdwq_no = cpup->hdwq;
++#endif
+ 		lpfc_printf_log(phba, KERN_INFO, LOG_INIT,
+ 				"3340 Set Affinity: not present "
+ 				"CPU %d hdwq %d\n",
+@@ -11175,11 +11203,9 @@ static void lpfc_cpuhp_add(struct lpfc_hba *phba)
+ 
+ 	rcu_read_lock();
+ 
+-	if (!list_empty(&phba->poll_list)) {
+-		timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0);
++	if (!list_empty(&phba->poll_list))
+ 		mod_timer(&phba->cpuhp_poll_timer,
+ 			  jiffies + msecs_to_jiffies(LPFC_POLL_HB));
+-	}
+ 
+ 	rcu_read_unlock();
+ 
+@@ -13145,6 +13171,7 @@ lpfc_pci_probe_one_s4(struct pci_dev *pdev, const struct pci_device_id *pid)
+ 	lpfc_sli4_ras_setup(phba);
+ 
+ 	INIT_LIST_HEAD(&phba->poll_list);
++	timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0);
+ 	cpuhp_state_add_instance_nocalls(lpfc_cpuhp_state, &phba->cpuhp);
+ 
+ 	return 0;
+diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c
+index db4a04a207ec..8403d7ceafe4 100644
+--- a/drivers/scsi/lpfc/lpfc_nvme.c
++++ b/drivers/scsi/lpfc/lpfc_nvme.c
+@@ -382,13 +382,15 @@ lpfc_nvme_remoteport_delete(struct nvme_fc_remote_port *remoteport)
+ 	if (ndlp->upcall_flags & NLP_WAIT_FOR_UNREG) {
+ 		ndlp->nrport = NULL;
+ 		ndlp->upcall_flags &= ~NLP_WAIT_FOR_UNREG;
+-	}
+-	spin_unlock_irq(&vport->phba->hbalock);
++		spin_unlock_irq(&vport->phba->hbalock);
+ 
+-	/* Remove original register reference. The host transport
+-	 * won't reference this rport/remoteport any further.
+-	 */
+-	lpfc_nlp_put(ndlp);
++		/* Remove original register reference. The host transport
++		 * won't reference this rport/remoteport any further.
++		 */
++		lpfc_nlp_put(ndlp);
++	} else {
++		spin_unlock_irq(&vport->phba->hbalock);
++	}
+ 
+  rport_err:
+ 	return;
+@@ -1010,6 +1012,9 @@ lpfc_nvme_io_cmd_wqe_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pwqeIn,
+ 	uint32_t code, status, idx;
+ 	uint16_t cid, sqhd, data;
+ 	uint32_t *ptr;
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++	int cpu;
++#endif
+ 
+ 	/* Sanity check on return of outstanding command */
+ 	if (!lpfc_ncmd) {
+@@ -1182,19 +1187,15 @@ out_err:
+ 		phba->ktime_last_cmd = lpfc_ncmd->ts_data_nvme;
+ 		lpfc_nvme_ktime(phba, lpfc_ncmd);
+ 	}
+-	if (unlikely(phba->cpucheck_on & LPFC_CHECK_NVME_IO)) {
+-		uint32_t cpu;
+-		idx = lpfc_ncmd->cur_iocbq.hba_wqidx;
++	if (unlikely(phba->hdwqstat_on & LPFC_CHECK_NVME_IO)) {
+ 		cpu = raw_smp_processor_id();
+-		if (cpu < LPFC_CHECK_CPU_CNT) {
+-			if (lpfc_ncmd->cpu != cpu)
+-				lpfc_printf_vlog(vport,
+-						 KERN_INFO, LOG_NVME_IOERR,
+-						 "6701 CPU Check cmpl: "
+-						 "cpu %d expect %d\n",
+-						 cpu, lpfc_ncmd->cpu);
+-			phba->sli4_hba.hdwq[idx].cpucheck_cmpl_io[cpu]++;
+-		}
++		this_cpu_inc(phba->sli4_hba.c_stat->cmpl_io);
++		if (lpfc_ncmd->cpu != cpu)
++			lpfc_printf_vlog(vport,
++					 KERN_INFO, LOG_NVME_IOERR,
++					 "6701 CPU Check cmpl: "
++					 "cpu %d expect %d\n",
++					 cpu, lpfc_ncmd->cpu);
+ 	}
+ #endif
+ 
+@@ -1743,19 +1744,17 @@ lpfc_nvme_fcp_io_submit(struct nvme_fc_local_port *pnvme_lport,
+ 	if (lpfc_ncmd->ts_cmd_start)
+ 		lpfc_ncmd->ts_cmd_wqput = ktime_get_ns();
+ 
+-	if (phba->cpucheck_on & LPFC_CHECK_NVME_IO) {
++	if (phba->hdwqstat_on & LPFC_CHECK_NVME_IO) {
+ 		cpu = raw_smp_processor_id();
+-		if (cpu < LPFC_CHECK_CPU_CNT) {
+-			lpfc_ncmd->cpu = cpu;
+-			if (idx != cpu)
+-				lpfc_printf_vlog(vport,
+-						 KERN_INFO, LOG_NVME_IOERR,
+-						"6702 CPU Check cmd: "
+-						"cpu %d wq %d\n",
+-						lpfc_ncmd->cpu,
+-						lpfc_queue_info->index);
+-			phba->sli4_hba.hdwq[idx].cpucheck_xmt_io[cpu]++;
+-		}
++		this_cpu_inc(phba->sli4_hba.c_stat->xmt_io);
++		lpfc_ncmd->cpu = cpu;
++		if (idx != cpu)
++			lpfc_printf_vlog(vport,
++					 KERN_INFO, LOG_NVME_IOERR,
++					"6702 CPU Check cmd: "
++					"cpu %d wq %d\n",
++					lpfc_ncmd->cpu,
++					lpfc_queue_info->index);
+ 	}
+ #endif
+ 	return 0;
+diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c
+index 9dc9afe1c255..f3760a4827d8 100644
+--- a/drivers/scsi/lpfc/lpfc_nvmet.c
++++ b/drivers/scsi/lpfc/lpfc_nvmet.c
+@@ -707,7 +707,7 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
+ 	struct lpfc_nvmet_rcv_ctx *ctxp;
+ 	uint32_t status, result, op, start_clean, logerr;
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	uint32_t id;
++	int id;
+ #endif
+ 
+ 	ctxp = cmdwqe->context2;
+@@ -814,16 +814,14 @@ lpfc_nvmet_xmt_fcp_op_cmp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdwqe,
+ 		rsp->done(rsp);
+ 	}
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	if (phba->cpucheck_on & LPFC_CHECK_NVMET_IO) {
++	if (phba->hdwqstat_on & LPFC_CHECK_NVMET_IO) {
+ 		id = raw_smp_processor_id();
+-		if (id < LPFC_CHECK_CPU_CNT) {
+-			if (ctxp->cpu != id)
+-				lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
+-						"6704 CPU Check cmdcmpl: "
+-						"cpu %d expect %d\n",
+-						id, ctxp->cpu);
+-			phba->sli4_hba.hdwq[rsp->hwqid].cpucheck_cmpl_io[id]++;
+-		}
++		this_cpu_inc(phba->sli4_hba.c_stat->cmpl_io);
++		if (ctxp->cpu != id)
++			lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
++					"6704 CPU Check cmdcmpl: "
++					"cpu %d expect %d\n",
++					id, ctxp->cpu);
+ 	}
+ #endif
+ }
+@@ -931,6 +929,9 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 	struct lpfc_sli_ring *pring;
+ 	unsigned long iflags;
+ 	int rc;
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++	int id;
++#endif
+ 
+ 	if (phba->pport->load_flag & FC_UNLOADING) {
+ 		rc = -ENODEV;
+@@ -954,16 +955,14 @@ lpfc_nvmet_xmt_fcp_op(struct nvmet_fc_target_port *tgtport,
+ 	if (!ctxp->hdwq)
+ 		ctxp->hdwq = &phba->sli4_hba.hdwq[rsp->hwqid];
+ 
+-	if (phba->cpucheck_on & LPFC_CHECK_NVMET_IO) {
+-		int id = raw_smp_processor_id();
+-		if (id < LPFC_CHECK_CPU_CNT) {
+-			if (rsp->hwqid != id)
+-				lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
+-						"6705 CPU Check OP: "
+-						"cpu %d expect %d\n",
+-						id, rsp->hwqid);
+-			phba->sli4_hba.hdwq[rsp->hwqid].cpucheck_xmt_io[id]++;
+-		}
++	if (phba->hdwqstat_on & LPFC_CHECK_NVMET_IO) {
++		id = raw_smp_processor_id();
++		this_cpu_inc(phba->sli4_hba.c_stat->xmt_io);
++		if (rsp->hwqid != id)
++			lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
++					"6705 CPU Check OP: "
++					"cpu %d expect %d\n",
++					id, rsp->hwqid);
+ 		ctxp->cpu = id; /* Setup cpu for cmpl check */
+ 	}
+ #endif
+@@ -2270,15 +2269,13 @@ lpfc_nvmet_unsol_fcp_buffer(struct lpfc_hba *phba,
+ 	size = nvmebuf->bytes_recv;
+ 
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	if (phba->cpucheck_on & LPFC_CHECK_NVMET_RCV) {
+-		if (current_cpu < LPFC_CHECK_CPU_CNT) {
+-			if (idx != current_cpu)
+-				lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
+-						"6703 CPU Check rcv: "
+-						"cpu %d expect %d\n",
+-						current_cpu, idx);
+-			phba->sli4_hba.hdwq[idx].cpucheck_rcv_io[current_cpu]++;
+-		}
++	if (phba->hdwqstat_on & LPFC_CHECK_NVMET_IO) {
++		this_cpu_inc(phba->sli4_hba.c_stat->rcv_io);
++		if (idx != current_cpu)
++			lpfc_printf_log(phba, KERN_INFO, LOG_NVME_IOERR,
++					"6703 CPU Check rcv: "
++					"cpu %d expect %d\n",
++					current_cpu, idx);
+ 	}
+ #endif
+ 
+diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c
+index 96ac4a154c58..ed8bcbd043c4 100644
+--- a/drivers/scsi/lpfc/lpfc_scsi.c
++++ b/drivers/scsi/lpfc/lpfc_scsi.c
+@@ -3805,9 +3805,6 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ 	struct Scsi_Host *shost;
+ 	int idx;
+ 	uint32_t logit = LOG_FCP;
+-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	int cpu;
+-#endif
+ 
+ 	/* Guard against abort handler being called at same time */
+ 	spin_lock(&lpfc_cmd->buf_lock);
+@@ -3826,11 +3823,8 @@ lpfc_scsi_cmd_iocb_cmpl(struct lpfc_hba *phba, struct lpfc_iocbq *pIocbIn,
+ 		phba->sli4_hba.hdwq[idx].scsi_cstat.io_cmpls++;
+ 
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	if (unlikely(phba->cpucheck_on & LPFC_CHECK_SCSI_IO)) {
+-		cpu = raw_smp_processor_id();
+-		if (cpu < LPFC_CHECK_CPU_CNT && phba->sli4_hba.hdwq)
+-			phba->sli4_hba.hdwq[idx].cpucheck_cmpl_io[cpu]++;
+-	}
++	if (unlikely(phba->hdwqstat_on & LPFC_CHECK_SCSI_IO))
++		this_cpu_inc(phba->sli4_hba.c_stat->cmpl_io);
+ #endif
+ 	shost = cmd->device->host;
+ 
+@@ -4503,9 +4497,6 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
+ 	struct lpfc_io_buf *lpfc_cmd;
+ 	struct fc_rport *rport = starget_to_rport(scsi_target(cmnd->device));
+ 	int err, idx;
+-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	int cpu;
+-#endif
+ 
+ 	rdata = lpfc_rport_data_from_scsi_device(cmnd->device);
+ 
+@@ -4626,14 +4617,8 @@ lpfc_queuecommand(struct Scsi_Host *shost, struct scsi_cmnd *cmnd)
+ 	lpfc_scsi_prep_cmnd(vport, lpfc_cmd, ndlp);
+ 
+ #ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-	if (unlikely(phba->cpucheck_on & LPFC_CHECK_SCSI_IO)) {
+-		cpu = raw_smp_processor_id();
+-		if (cpu < LPFC_CHECK_CPU_CNT) {
+-			struct lpfc_sli4_hdw_queue *hdwq =
+-					&phba->sli4_hba.hdwq[lpfc_cmd->hdwq_no];
+-			hdwq->cpucheck_xmt_io[cpu]++;
+-		}
+-	}
++	if (unlikely(phba->hdwqstat_on & LPFC_CHECK_SCSI_IO))
++		this_cpu_inc(phba->sli4_hba.c_stat->xmt_io);
+ #endif
+ 	err = lpfc_sli_issue_iocb(phba, LPFC_FCP_RING,
+ 				  &lpfc_cmd->cur_iocbq, SLI_IOCB_RET_IOCB);
+diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
+index 64002b0cb02d..396e24764a1b 100644
+--- a/drivers/scsi/lpfc/lpfc_sli.c
++++ b/drivers/scsi/lpfc/lpfc_sli.c
+@@ -2511,6 +2511,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb)
+ 	    !pmb->u.mb.mbxStatus) {
+ 		rpi = pmb->u.mb.un.varWords[0];
+ 		vpi = pmb->u.mb.un.varRegLogin.vpi;
++		if (phba->sli_rev == LPFC_SLI_REV4)
++			vpi -= phba->sli4_hba.max_cfg_param.vpi_base;
+ 		lpfc_unreg_login(phba, vpi, rpi, pmb);
+ 		pmb->vport = vport;
+ 		pmb->mbox_cmpl = lpfc_sli_def_mbox_cmpl;
+@@ -4044,6 +4046,11 @@ lpfc_sli_flush_io_rings(struct lpfc_hba *phba)
+ 	struct lpfc_iocbq *piocb, *next_iocb;
+ 
+ 	spin_lock_irq(&phba->hbalock);
++	if (phba->hba_flag & HBA_IOQ_FLUSH ||
++	    !phba->sli4_hba.hdwq) {
++		spin_unlock_irq(&phba->hbalock);
++		return;
++	}
+ 	/* Indicate the I/O queues are flushed */
+ 	phba->hba_flag |= HBA_IOQ_FLUSH;
+ 	spin_unlock_irq(&phba->hbalock);
+@@ -14450,12 +14457,10 @@ static inline void lpfc_sli4_add_to_poll_list(struct lpfc_queue *eq)
+ {
+ 	struct lpfc_hba *phba = eq->phba;
+ 
+-	if (list_empty(&phba->poll_list)) {
+-		timer_setup(&phba->cpuhp_poll_timer, lpfc_sli4_poll_hbtimer, 0);
+-		/* kickstart slowpath processing for this eq */
++	/* kickstart slowpath processing if needed */
++	if (list_empty(&phba->poll_list))
+ 		mod_timer(&phba->cpuhp_poll_timer,
+ 			  jiffies + msecs_to_jiffies(LPFC_POLL_HB));
+-	}
+ 
+ 	list_add_rcu(&eq->_poll_list, &phba->poll_list);
+ 	synchronize_rcu();
+diff --git a/drivers/scsi/lpfc/lpfc_sli4.h b/drivers/scsi/lpfc/lpfc_sli4.h
+index d963ca871383..8da7429e385a 100644
+--- a/drivers/scsi/lpfc/lpfc_sli4.h
++++ b/drivers/scsi/lpfc/lpfc_sli4.h
+@@ -697,13 +697,6 @@ struct lpfc_sli4_hdw_queue {
+ 	struct lpfc_lock_stat lock_conflict;
+ #endif
+ 
+-#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
+-#define LPFC_CHECK_CPU_CNT    128
+-	uint32_t cpucheck_rcv_io[LPFC_CHECK_CPU_CNT];
+-	uint32_t cpucheck_xmt_io[LPFC_CHECK_CPU_CNT];
+-	uint32_t cpucheck_cmpl_io[LPFC_CHECK_CPU_CNT];
+-#endif
+-
+ 	/* Per HDWQ pool resources */
+ 	struct list_head sgl_list;
+ 	struct list_head cmd_rsp_buf_list;
+@@ -740,6 +733,15 @@ struct lpfc_sli4_hdw_queue {
+ #define lpfc_qp_spin_lock(lock, qp, lstat) spin_lock(lock)
+ #endif
+ 
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++struct lpfc_hdwq_stat {
++	u32 hdwq_no;
++	u32 rcv_io;
++	u32 xmt_io;
++	u32 cmpl_io;
++};
++#endif
++
+ struct lpfc_sli4_hba {
+ 	void __iomem *conf_regs_memmap_p; /* Kernel memory mapped address for
+ 					   * config space registers
+@@ -921,6 +923,9 @@ struct lpfc_sli4_hba {
+ 	struct cpumask numa_mask;
+ 	uint16_t curr_disp_cpu;
+ 	struct lpfc_eq_intr_info __percpu *eq_info;
++#ifdef CONFIG_SCSI_LPFC_DEBUG_FS
++	struct lpfc_hdwq_stat __percpu *c_stat;
++#endif
+ 	uint32_t conf_trunk;
+ #define lpfc_conf_trunk_port0_WORD	conf_trunk
+ #define lpfc_conf_trunk_port0_SHIFT	0
+diff --git a/drivers/scsi/scsi_transport_iscsi.c b/drivers/scsi/scsi_transport_iscsi.c
+index dfc726fa34e3..443ace019852 100644
+--- a/drivers/scsi/scsi_transport_iscsi.c
++++ b/drivers/scsi/scsi_transport_iscsi.c
+@@ -2012,7 +2012,7 @@ static void __iscsi_unbind_session(struct work_struct *work)
+ 	if (session->target_id == ISCSI_MAX_TARGET) {
+ 		spin_unlock_irqrestore(&session->lock, flags);
+ 		mutex_unlock(&ihost->mutex);
+-		return;
++		goto unbind_session_exit;
+ 	}
+ 
+ 	target_id = session->target_id;
+@@ -2024,6 +2024,8 @@ static void __iscsi_unbind_session(struct work_struct *work)
+ 		ida_simple_remove(&iscsi_sess_ida, target_id);
+ 
+ 	scsi_remove_target(&session->dev);
++
++unbind_session_exit:
+ 	iscsi_session_event(session, ISCSI_KEVENT_UNBIND_SESSION);
+ 	ISCSI_DBG_TRANS_SESSION(session, "Completed target removal\n");
+ }
+diff --git a/drivers/staging/comedi/comedi_fops.c b/drivers/staging/comedi/comedi_fops.c
+index 08d1bbbebf2d..e84b4fb493d6 100644
+--- a/drivers/staging/comedi/comedi_fops.c
++++ b/drivers/staging/comedi/comedi_fops.c
+@@ -2725,8 +2725,10 @@ static int comedi_open(struct inode *inode, struct file *file)
+ 	}
+ 
+ 	cfp = kzalloc(sizeof(*cfp), GFP_KERNEL);
+-	if (!cfp)
++	if (!cfp) {
++		comedi_dev_put(dev);
+ 		return -ENOMEM;
++	}
+ 
+ 	cfp->dev = dev;
+ 
+diff --git a/drivers/staging/comedi/drivers/dt2815.c b/drivers/staging/comedi/drivers/dt2815.c
+index 83026ba63d1c..78a7c1b3448a 100644
+--- a/drivers/staging/comedi/drivers/dt2815.c
++++ b/drivers/staging/comedi/drivers/dt2815.c
+@@ -92,6 +92,7 @@ static int dt2815_ao_insn(struct comedi_device *dev, struct comedi_subdevice *s,
+ 	int ret;
+ 
+ 	for (i = 0; i < insn->n; i++) {
++		/* FIXME: lo bit 0 chooses voltage output or current output */
+ 		lo = ((data[i] & 0x0f) << 4) | (chan << 1) | 0x01;
+ 		hi = (data[i] & 0xff0) >> 4;
+ 
+@@ -105,6 +106,8 @@ static int dt2815_ao_insn(struct comedi_device *dev, struct comedi_subdevice *s,
+ 		if (ret)
+ 			return ret;
+ 
++		outb(hi, dev->iobase + DT2815_DATA);
++
+ 		devpriv->ao_readback[chan] = data[i];
+ 	}
+ 	return i;
+diff --git a/drivers/staging/gasket/gasket_sysfs.c b/drivers/staging/gasket/gasket_sysfs.c
+index a2d67c28f530..5f0e089573a2 100644
+--- a/drivers/staging/gasket/gasket_sysfs.c
++++ b/drivers/staging/gasket/gasket_sysfs.c
+@@ -228,8 +228,7 @@ int gasket_sysfs_create_entries(struct device *device,
+ 	}
+ 
+ 	mutex_lock(&mapping->mutex);
+-	for (i = 0; strcmp(attrs[i].attr.attr.name, GASKET_ARRAY_END_MARKER);
+-		i++) {
++	for (i = 0; attrs[i].attr.attr.name != NULL; i++) {
+ 		if (mapping->attribute_count == GASKET_SYSFS_MAX_NODES) {
+ 			dev_err(device,
+ 				"Maximum number of sysfs nodes reached for device\n");
+diff --git a/drivers/staging/gasket/gasket_sysfs.h b/drivers/staging/gasket/gasket_sysfs.h
+index 1d0eed66a7f4..ab5aa351d555 100644
+--- a/drivers/staging/gasket/gasket_sysfs.h
++++ b/drivers/staging/gasket/gasket_sysfs.h
+@@ -30,10 +30,6 @@
+  */
+ #define GASKET_SYSFS_MAX_NODES 196
+ 
+-/* End markers for sysfs struct arrays. */
+-#define GASKET_ARRAY_END_TOKEN GASKET_RESERVED_ARRAY_END
+-#define GASKET_ARRAY_END_MARKER __stringify(GASKET_ARRAY_END_TOKEN)
+-
+ /*
+  * Terminator struct for a gasket_sysfs_attr array. Must be at the end of
+  * all gasket_sysfs_attribute arrays.
+diff --git a/drivers/staging/vt6656/int.c b/drivers/staging/vt6656/int.c
+index af215860be4c..ac563e23868e 100644
+--- a/drivers/staging/vt6656/int.c
++++ b/drivers/staging/vt6656/int.c
+@@ -145,7 +145,8 @@ void vnt_int_process_data(struct vnt_private *priv)
+ 				priv->wake_up_count =
+ 					priv->hw->conf.listen_interval;
+ 
+-			--priv->wake_up_count;
++			if (priv->wake_up_count)
++				--priv->wake_up_count;
+ 
+ 			/* Turn on wake up to listen next beacon */
+ 			if (priv->wake_up_count == 1)
+diff --git a/drivers/staging/vt6656/key.c b/drivers/staging/vt6656/key.c
+index dcd933a6b66e..40c58ac4e209 100644
+--- a/drivers/staging/vt6656/key.c
++++ b/drivers/staging/vt6656/key.c
+@@ -83,9 +83,6 @@ static int vnt_set_keymode(struct ieee80211_hw *hw, u8 *mac_addr,
+ 	case  VNT_KEY_PAIRWISE:
+ 		key_mode |= mode;
+ 		key_inx = 4;
+-		/* Don't save entry for pairwise key for station mode */
+-		if (priv->op_mode == NL80211_IFTYPE_STATION)
+-			clear_bit(entry, &priv->key_entry_inuse);
+ 		break;
+ 	default:
+ 		return -EINVAL;
+@@ -109,7 +106,6 @@ static int vnt_set_keymode(struct ieee80211_hw *hw, u8 *mac_addr,
+ int vnt_set_keys(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ 		 struct ieee80211_vif *vif, struct ieee80211_key_conf *key)
+ {
+-	struct ieee80211_bss_conf *conf = &vif->bss_conf;
+ 	struct vnt_private *priv = hw->priv;
+ 	u8 *mac_addr = NULL;
+ 	u8 key_dec_mode = 0;
+@@ -151,16 +147,12 @@ int vnt_set_keys(struct ieee80211_hw *hw, struct ieee80211_sta *sta,
+ 		key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV;
+ 	}
+ 
+-	if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE) {
++	if (key->flags & IEEE80211_KEY_FLAG_PAIRWISE)
+ 		vnt_set_keymode(hw, mac_addr, key, VNT_KEY_PAIRWISE,
+ 				key_dec_mode, true);
+-	} else {
+-		vnt_set_keymode(hw, mac_addr, key, VNT_KEY_DEFAULTKEY,
++	else
++		vnt_set_keymode(hw, mac_addr, key, VNT_KEY_GROUP_ADDRESS,
+ 				key_dec_mode, true);
+ 
+-		vnt_set_keymode(hw, (u8 *)conf->bssid, key,
+-				VNT_KEY_GROUP_ADDRESS, key_dec_mode, true);
+-	}
+-
+ 	return 0;
+ }
+diff --git a/drivers/staging/vt6656/main_usb.c b/drivers/staging/vt6656/main_usb.c
+index 5e48b3ddb94c..1da9905a23b8 100644
+--- a/drivers/staging/vt6656/main_usb.c
++++ b/drivers/staging/vt6656/main_usb.c
+@@ -632,8 +632,6 @@ static int vnt_add_interface(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
+ 
+ 	priv->op_mode = vif->type;
+ 
+-	vnt_set_bss_mode(priv);
+-
+ 	/* LED blink on TX */
+ 	vnt_mac_set_led(priv, LEDSTS_STS, LEDSTS_INTER);
+ 
+@@ -720,7 +718,6 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 		priv->basic_rates = conf->basic_rates;
+ 
+ 		vnt_update_top_rates(priv);
+-		vnt_set_bss_mode(priv);
+ 
+ 		dev_dbg(&priv->usb->dev, "basic rates %x\n", conf->basic_rates);
+ 	}
+@@ -749,11 +746,14 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 			priv->short_slot_time = false;
+ 
+ 		vnt_set_short_slot_time(priv);
+-		vnt_update_ifs(priv);
+ 		vnt_set_vga_gain_offset(priv, priv->bb_vga[0]);
+ 		vnt_update_pre_ed_threshold(priv, false);
+ 	}
+ 
++	if (changed & (BSS_CHANGED_BASIC_RATES | BSS_CHANGED_ERP_PREAMBLE |
++		       BSS_CHANGED_ERP_SLOT))
++		vnt_set_bss_mode(priv);
++
+ 	if (changed & BSS_CHANGED_TXPOWER)
+ 		vnt_rf_setpower(priv, priv->current_rate,
+ 				conf->chandef.chan->hw_value);
+@@ -777,12 +777,15 @@ static void vnt_bss_info_changed(struct ieee80211_hw *hw,
+ 			vnt_mac_reg_bits_on(priv, MAC_REG_TFTCTL,
+ 					    TFTCTL_TSFCNTREN);
+ 
+-			vnt_adjust_tsf(priv, conf->beacon_rate->hw_value,
+-				       conf->sync_tsf, priv->current_tsf);
+-
+ 			vnt_mac_set_beacon_interval(priv, conf->beacon_int);
+ 
+ 			vnt_reset_next_tbtt(priv, conf->beacon_int);
++
++			vnt_adjust_tsf(priv, conf->beacon_rate->hw_value,
++				       conf->sync_tsf, priv->current_tsf);
++
++			vnt_update_next_tbtt(priv,
++					     conf->sync_tsf, conf->beacon_int);
+ 		} else {
+ 			vnt_clear_current_tsf(priv);
+ 
+@@ -817,15 +820,11 @@ static void vnt_configure(struct ieee80211_hw *hw,
+ {
+ 	struct vnt_private *priv = hw->priv;
+ 	u8 rx_mode = 0;
+-	int rc;
+ 
+ 	*total_flags &= FIF_ALLMULTI | FIF_OTHER_BSS | FIF_BCN_PRBRESP_PROMISC;
+ 
+-	rc = vnt_control_in(priv, MESSAGE_TYPE_READ, MAC_REG_RCR,
+-			    MESSAGE_REQUEST_MACREG, sizeof(u8), &rx_mode);
+-
+-	if (!rc)
+-		rx_mode = RCR_MULTICAST | RCR_BROADCAST;
++	vnt_control_in(priv, MESSAGE_TYPE_READ, MAC_REG_RCR,
++		       MESSAGE_REQUEST_MACREG, sizeof(u8), &rx_mode);
+ 
+ 	dev_dbg(&priv->usb->dev, "rx mode in = %x\n", rx_mode);
+ 
+@@ -866,8 +865,12 @@ static int vnt_set_key(struct ieee80211_hw *hw, enum set_key_cmd cmd,
+ 			return -EOPNOTSUPP;
+ 		break;
+ 	case DISABLE_KEY:
+-		if (test_bit(key->hw_key_idx, &priv->key_entry_inuse))
++		if (test_bit(key->hw_key_idx, &priv->key_entry_inuse)) {
+ 			clear_bit(key->hw_key_idx, &priv->key_entry_inuse);
++
++			vnt_mac_disable_keyentry(priv, key->hw_key_idx);
++		}
++
+ 	default:
+ 		break;
+ 	}
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index 27284a2dcd2b..436cc51c92c3 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -302,10 +302,6 @@ int hvc_instantiate(uint32_t vtermno, int index, const struct hv_ops *ops)
+ 	vtermnos[index] = vtermno;
+ 	cons_ops[index] = ops;
+ 
+-	/* reserve all indices up to and including this index */
+-	if (last_hvc < index)
+-		last_hvc = index;
+-
+ 	/* check if we need to re-register the kernel console */
+ 	hvc_check_console(index);
+ 
+@@ -960,13 +956,22 @@ struct hvc_struct *hvc_alloc(uint32_t vtermno, int data,
+ 		    cons_ops[i] == hp->ops)
+ 			break;
+ 
+-	/* no matching slot, just use a counter */
+-	if (i >= MAX_NR_HVC_CONSOLES)
+-		i = ++last_hvc;
++	if (i >= MAX_NR_HVC_CONSOLES) {
++
++		/* find 'empty' slot for console */
++		for (i = 0; i < MAX_NR_HVC_CONSOLES && vtermnos[i] != -1; i++) {
++		}
++
++		/* no matching slot, just use a counter */
++		if (i == MAX_NR_HVC_CONSOLES)
++			i = ++last_hvc + MAX_NR_HVC_CONSOLES;
++	}
+ 
+ 	hp->index = i;
+-	cons_ops[i] = ops;
+-	vtermnos[i] = vtermno;
++	if (i < MAX_NR_HVC_CONSOLES) {
++		cons_ops[i] = ops;
++		vtermnos[i] = vtermno;
++	}
+ 
+ 	list_add_tail(&(hp->next), &hvc_structs);
+ 	mutex_unlock(&hvc_structs_mutex);
+diff --git a/drivers/tty/rocket.c b/drivers/tty/rocket.c
+index fbaa4ec85560..e2138e7d5dc6 100644
+--- a/drivers/tty/rocket.c
++++ b/drivers/tty/rocket.c
+@@ -632,18 +632,21 @@ init_r_port(int board, int aiop, int chan, struct pci_dev *pci_dev)
+ 	tty_port_init(&info->port);
+ 	info->port.ops = &rocket_port_ops;
+ 	info->flags &= ~ROCKET_MODE_MASK;
+-	switch (pc104[board][line]) {
+-	case 422:
+-		info->flags |= ROCKET_MODE_RS422;
+-		break;
+-	case 485:
+-		info->flags |= ROCKET_MODE_RS485;
+-		break;
+-	case 232:
+-	default:
++	if (board < ARRAY_SIZE(pc104) && line < ARRAY_SIZE(pc104_1))
++		switch (pc104[board][line]) {
++		case 422:
++			info->flags |= ROCKET_MODE_RS422;
++			break;
++		case 485:
++			info->flags |= ROCKET_MODE_RS485;
++			break;
++		case 232:
++		default:
++			info->flags |= ROCKET_MODE_RS232;
++			break;
++		}
++	else
+ 		info->flags |= ROCKET_MODE_RS232;
+-		break;
+-	}
+ 
+ 	info->intmask = RXF_TRIG | TXFIFO_MT | SRC_INT | DELTA_CD | DELTA_CTS | DELTA_DSR;
+ 	if (sInitChan(ctlp, &info->channel, aiop, chan) == 0) {
+diff --git a/drivers/tty/serial/owl-uart.c b/drivers/tty/serial/owl-uart.c
+index 42c8cc93b603..c149f8c30007 100644
+--- a/drivers/tty/serial/owl-uart.c
++++ b/drivers/tty/serial/owl-uart.c
+@@ -680,6 +680,12 @@ static int owl_uart_probe(struct platform_device *pdev)
+ 		return PTR_ERR(owl_port->clk);
+ 	}
+ 
++	ret = clk_prepare_enable(owl_port->clk);
++	if (ret) {
++		dev_err(&pdev->dev, "could not enable clk\n");
++		return ret;
++	}
++
+ 	owl_port->port.dev = &pdev->dev;
+ 	owl_port->port.line = pdev->id;
+ 	owl_port->port.type = PORT_OWL;
+@@ -712,6 +718,7 @@ static int owl_uart_remove(struct platform_device *pdev)
+ 
+ 	uart_remove_one_port(&owl_uart_driver, &owl_port->port);
+ 	owl_uart_ports[pdev->id] = NULL;
++	clk_disable_unprepare(owl_port->clk);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/tty/serial/sh-sci.c b/drivers/tty/serial/sh-sci.c
+index c073aa7001c4..e1179e74a2b8 100644
+--- a/drivers/tty/serial/sh-sci.c
++++ b/drivers/tty/serial/sh-sci.c
+@@ -870,9 +870,16 @@ static void sci_receive_chars(struct uart_port *port)
+ 				tty_insert_flip_char(tport, c, TTY_NORMAL);
+ 		} else {
+ 			for (i = 0; i < count; i++) {
+-				char c = serial_port_in(port, SCxRDR);
+-
+-				status = serial_port_in(port, SCxSR);
++				char c;
++
++				if (port->type == PORT_SCIF ||
++				    port->type == PORT_HSCIF) {
++					status = serial_port_in(port, SCxSR);
++					c = serial_port_in(port, SCxRDR);
++				} else {
++					c = serial_port_in(port, SCxRDR);
++					status = serial_port_in(port, SCxSR);
++				}
+ 				if (uart_handle_sysrq_char(port, c)) {
+ 					count--; i--;
+ 					continue;
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 98db9dc168ff..7a9b360b0438 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -26,13 +26,15 @@
+ 
+ #define CDNS_UART_TTY_NAME	"ttyPS"
+ #define CDNS_UART_NAME		"xuartps"
++#define CDNS_UART_MAJOR		0	/* use dynamic node allocation */
++#define CDNS_UART_MINOR		0	/* works best with devtmpfs */
++#define CDNS_UART_NR_PORTS	16
+ #define CDNS_UART_FIFO_SIZE	64	/* FIFO size */
+ #define CDNS_UART_REGISTER_SPACE	0x1000
+ #define TX_TIMEOUT		500000
+ 
+ /* Rx Trigger level */
+ static int rx_trigger_level = 56;
+-static int uartps_major;
+ module_param(rx_trigger_level, uint, 0444);
+ MODULE_PARM_DESC(rx_trigger_level, "Rx trigger level, 1-63 bytes");
+ 
+@@ -188,7 +190,6 @@ MODULE_PARM_DESC(rx_timeout, "Rx timeout, 1-255");
+  * @pclk:		APB clock
+  * @cdns_uart_driver:	Pointer to UART driver
+  * @baud:		Current baud rate
+- * @id:			Port ID
+  * @clk_rate_change_nb:	Notifier block for clock changes
+  * @quirks:		Flags for RXBS support.
+  */
+@@ -198,7 +199,6 @@ struct cdns_uart {
+ 	struct clk		*pclk;
+ 	struct uart_driver	*cdns_uart_driver;
+ 	unsigned int		baud;
+-	int			id;
+ 	struct notifier_block	clk_rate_change_nb;
+ 	u32			quirks;
+ 	bool cts_override;
+@@ -1145,6 +1145,8 @@ static const struct uart_ops cdns_uart_ops = {
+ #endif
+ };
+ 
++static struct uart_driver cdns_uart_uart_driver;
++
+ #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+ /**
+  * cdns_uart_console_putchar - write the character to the FIFO buffer
+@@ -1284,6 +1286,16 @@ static int cdns_uart_console_setup(struct console *co, char *options)
+ 
+ 	return uart_set_options(port, co, baud, parity, bits, flow);
+ }
++
++static struct console cdns_uart_console = {
++	.name	= CDNS_UART_TTY_NAME,
++	.write	= cdns_uart_console_write,
++	.device	= uart_console_device,
++	.setup	= cdns_uart_console_setup,
++	.flags	= CON_PRINTBUFFER,
++	.index	= -1, /* Specified on the cmdline (e.g. console=ttyPS ) */
++	.data	= &cdns_uart_uart_driver,
++};
+ #endif /* CONFIG_SERIAL_XILINX_PS_UART_CONSOLE */
+ 
+ #ifdef CONFIG_PM_SLEEP
+@@ -1415,89 +1427,8 @@ static const struct of_device_id cdns_uart_of_match[] = {
+ };
+ MODULE_DEVICE_TABLE(of, cdns_uart_of_match);
+ 
+-/*
+- * Maximum number of instances without alias IDs but if there is alias
+- * which target "< MAX_UART_INSTANCES" range this ID can't be used.
+- */
+-#define MAX_UART_INSTANCES	32
+-
+-/* Stores static aliases list */
+-static DECLARE_BITMAP(alias_bitmap, MAX_UART_INSTANCES);
+-static int alias_bitmap_initialized;
+-
+-/* Stores actual bitmap of allocated IDs with alias IDs together */
+-static DECLARE_BITMAP(bitmap, MAX_UART_INSTANCES);
+-/* Protect bitmap operations to have unique IDs */
+-static DEFINE_MUTEX(bitmap_lock);
+-
+-static int cdns_get_id(struct platform_device *pdev)
+-{
+-	int id, ret;
+-
+-	mutex_lock(&bitmap_lock);
+-
+-	/* Alias list is stable that's why get alias bitmap only once */
+-	if (!alias_bitmap_initialized) {
+-		ret = of_alias_get_alias_list(cdns_uart_of_match, "serial",
+-					      alias_bitmap, MAX_UART_INSTANCES);
+-		if (ret && ret != -EOVERFLOW) {
+-			mutex_unlock(&bitmap_lock);
+-			return ret;
+-		}
+-
+-		alias_bitmap_initialized++;
+-	}
+-
+-	/* Make sure that alias ID is not taken by instance without alias */
+-	bitmap_or(bitmap, bitmap, alias_bitmap, MAX_UART_INSTANCES);
+-
+-	dev_dbg(&pdev->dev, "Alias bitmap: %*pb\n",
+-		MAX_UART_INSTANCES, bitmap);
+-
+-	/* Look for a serialN alias */
+-	id = of_alias_get_id(pdev->dev.of_node, "serial");
+-	if (id < 0) {
+-		dev_warn(&pdev->dev,
+-			 "No serial alias passed. Using the first free id\n");
+-
+-		/*
+-		 * Start with id 0 and check if there is no serial0 alias
+-		 * which points to device which is compatible with this driver.
+-		 * If alias exists then try next free position.
+-		 */
+-		id = 0;
+-
+-		for (;;) {
+-			dev_info(&pdev->dev, "Checking id %d\n", id);
+-			id = find_next_zero_bit(bitmap, MAX_UART_INSTANCES, id);
+-
+-			/* No free empty instance */
+-			if (id == MAX_UART_INSTANCES) {
+-				dev_err(&pdev->dev, "No free ID\n");
+-				mutex_unlock(&bitmap_lock);
+-				return -EINVAL;
+-			}
+-
+-			dev_dbg(&pdev->dev, "The empty id is %d\n", id);
+-			/* Check if ID is empty */
+-			if (!test_and_set_bit(id, bitmap)) {
+-				/* Break the loop if bit is taken */
+-				dev_dbg(&pdev->dev,
+-					"Selected ID %d allocation passed\n",
+-					id);
+-				break;
+-			}
+-			dev_dbg(&pdev->dev,
+-				"Selected ID %d allocation failed\n", id);
+-			/* if taking bit fails then try next one */
+-			id++;
+-		}
+-	}
+-
+-	mutex_unlock(&bitmap_lock);
+-
+-	return id;
+-}
++/* Temporary variable for storing number of instances */
++static int instances;
+ 
+ /**
+  * cdns_uart_probe - Platform driver probe
+@@ -1507,16 +1438,11 @@ static int cdns_get_id(struct platform_device *pdev)
+  */
+ static int cdns_uart_probe(struct platform_device *pdev)
+ {
+-	int rc, irq;
++	int rc, id, irq;
+ 	struct uart_port *port;
+ 	struct resource *res;
+ 	struct cdns_uart *cdns_uart_data;
+ 	const struct of_device_id *match;
+-	struct uart_driver *cdns_uart_uart_driver;
+-	char *driver_name;
+-#ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+-	struct console *cdns_uart_console;
+-#endif
+ 
+ 	cdns_uart_data = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_data),
+ 			GFP_KERNEL);
+@@ -1526,64 +1452,35 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 	if (!port)
+ 		return -ENOMEM;
+ 
+-	cdns_uart_uart_driver = devm_kzalloc(&pdev->dev,
+-					     sizeof(*cdns_uart_uart_driver),
+-					     GFP_KERNEL);
+-	if (!cdns_uart_uart_driver)
+-		return -ENOMEM;
+-
+-	cdns_uart_data->id = cdns_get_id(pdev);
+-	if (cdns_uart_data->id < 0)
+-		return cdns_uart_data->id;
++	/* Look for a serialN alias */
++	id = of_alias_get_id(pdev->dev.of_node, "serial");
++	if (id < 0)
++		id = 0;
+ 
+-	/* There is a need to use unique driver name */
+-	driver_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%s%d",
+-				     CDNS_UART_NAME, cdns_uart_data->id);
+-	if (!driver_name) {
+-		rc = -ENOMEM;
+-		goto err_out_id;
++	if (id >= CDNS_UART_NR_PORTS) {
++		dev_err(&pdev->dev, "Cannot get uart_port structure\n");
++		return -ENODEV;
+ 	}
+ 
+-	cdns_uart_uart_driver->owner = THIS_MODULE;
+-	cdns_uart_uart_driver->driver_name = driver_name;
+-	cdns_uart_uart_driver->dev_name	= CDNS_UART_TTY_NAME;
+-	cdns_uart_uart_driver->major = uartps_major;
+-	cdns_uart_uart_driver->minor = cdns_uart_data->id;
+-	cdns_uart_uart_driver->nr = 1;
+-
++	if (!cdns_uart_uart_driver.state) {
++		cdns_uart_uart_driver.owner = THIS_MODULE;
++		cdns_uart_uart_driver.driver_name = CDNS_UART_NAME;
++		cdns_uart_uart_driver.dev_name = CDNS_UART_TTY_NAME;
++		cdns_uart_uart_driver.major = CDNS_UART_MAJOR;
++		cdns_uart_uart_driver.minor = CDNS_UART_MINOR;
++		cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS;
+ #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+-	cdns_uart_console = devm_kzalloc(&pdev->dev, sizeof(*cdns_uart_console),
+-					 GFP_KERNEL);
+-	if (!cdns_uart_console) {
+-		rc = -ENOMEM;
+-		goto err_out_id;
+-	}
+-
+-	strncpy(cdns_uart_console->name, CDNS_UART_TTY_NAME,
+-		sizeof(cdns_uart_console->name));
+-	cdns_uart_console->index = cdns_uart_data->id;
+-	cdns_uart_console->write = cdns_uart_console_write;
+-	cdns_uart_console->device = uart_console_device;
+-	cdns_uart_console->setup = cdns_uart_console_setup;
+-	cdns_uart_console->flags = CON_PRINTBUFFER;
+-	cdns_uart_console->data = cdns_uart_uart_driver;
+-	cdns_uart_uart_driver->cons = cdns_uart_console;
++		cdns_uart_uart_driver.cons = &cdns_uart_console;
+ #endif
+ 
+-	rc = uart_register_driver(cdns_uart_uart_driver);
+-	if (rc < 0) {
+-		dev_err(&pdev->dev, "Failed to register driver\n");
+-		goto err_out_id;
++		rc = uart_register_driver(&cdns_uart_uart_driver);
++		if (rc < 0) {
++			dev_err(&pdev->dev, "Failed to register driver\n");
++			return rc;
++		}
+ 	}
+ 
+-	cdns_uart_data->cdns_uart_driver = cdns_uart_uart_driver;
+-
+-	/*
+-	 * Setting up proper name_base needs to be done after uart
+-	 * registration because tty_driver structure is not filled.
+-	 * name_base is 0 by default.
+-	 */
+-	cdns_uart_uart_driver->tty_driver->name_base = cdns_uart_data->id;
++	cdns_uart_data->cdns_uart_driver = &cdns_uart_uart_driver;
+ 
+ 	match = of_match_node(cdns_uart_of_match, pdev->dev.of_node);
+ 	if (match && match->data) {
+@@ -1661,6 +1558,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 	port->ops	= &cdns_uart_ops;
+ 	port->fifosize	= CDNS_UART_FIFO_SIZE;
+ 	port->has_sysrq = IS_ENABLED(CONFIG_SERIAL_XILINX_PS_UART_CONSOLE);
++	port->line	= id;
+ 
+ 	/*
+ 	 * Register the port.
+@@ -1692,7 +1590,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 		console_port = port;
+ #endif
+ 
+-	rc = uart_add_one_port(cdns_uart_uart_driver, port);
++	rc = uart_add_one_port(&cdns_uart_uart_driver, port);
+ 	if (rc) {
+ 		dev_err(&pdev->dev,
+ 			"uart_add_one_port() failed; err=%i\n", rc);
+@@ -1702,13 +1600,15 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+ 	/* This is not port which is used for console that's why clean it up */
+ 	if (console_port == port &&
+-	    !(cdns_uart_uart_driver->cons->flags & CON_ENABLED))
++	    !(cdns_uart_uart_driver.cons->flags & CON_ENABLED))
+ 		console_port = NULL;
+ #endif
+ 
+-	uartps_major = cdns_uart_uart_driver->tty_driver->major;
+ 	cdns_uart_data->cts_override = of_property_read_bool(pdev->dev.of_node,
+ 							     "cts-override");
++
++	instances++;
++
+ 	return 0;
+ 
+ err_out_pm_disable:
+@@ -1724,12 +1624,8 @@ err_out_clk_disable:
+ err_out_clk_dis_pclk:
+ 	clk_disable_unprepare(cdns_uart_data->pclk);
+ err_out_unregister_driver:
+-	uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
+-err_out_id:
+-	mutex_lock(&bitmap_lock);
+-	if (cdns_uart_data->id < MAX_UART_INSTANCES)
+-		clear_bit(cdns_uart_data->id, bitmap);
+-	mutex_unlock(&bitmap_lock);
++	if (!instances)
++		uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
+ 	return rc;
+ }
+ 
+@@ -1752,10 +1648,6 @@ static int cdns_uart_remove(struct platform_device *pdev)
+ #endif
+ 	rc = uart_remove_one_port(cdns_uart_data->cdns_uart_driver, port);
+ 	port->mapbase = 0;
+-	mutex_lock(&bitmap_lock);
+-	if (cdns_uart_data->id < MAX_UART_INSTANCES)
+-		clear_bit(cdns_uart_data->id, bitmap);
+-	mutex_unlock(&bitmap_lock);
+ 	clk_disable_unprepare(cdns_uart_data->uartclk);
+ 	clk_disable_unprepare(cdns_uart_data->pclk);
+ 	pm_runtime_disable(&pdev->dev);
+@@ -1768,13 +1660,8 @@ static int cdns_uart_remove(struct platform_device *pdev)
+ 		console_port = NULL;
+ #endif
+ 
+-	/* If this is last instance major number should be initialized */
+-	mutex_lock(&bitmap_lock);
+-	if (bitmap_empty(bitmap, MAX_UART_INSTANCES))
+-		uartps_major = 0;
+-	mutex_unlock(&bitmap_lock);
+-
+-	uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
++	if (!--instances)
++		uart_unregister_driver(cdns_uart_data->cdns_uart_driver);
+ 	return rc;
+ }
+ 
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index b99ac3ebb2b5..cc1a04191365 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -81,6 +81,7 @@
+ #include <linux/errno.h>
+ #include <linux/kd.h>
+ #include <linux/slab.h>
++#include <linux/vmalloc.h>
+ #include <linux/major.h>
+ #include <linux/mm.h>
+ #include <linux/console.h>
+@@ -350,7 +351,7 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+ 	/* allocate everything in one go */
+ 	memsize = cols * rows * sizeof(char32_t);
+ 	memsize += rows * sizeof(char32_t *);
+-	p = kmalloc(memsize, GFP_KERNEL);
++	p = vmalloc(memsize);
+ 	if (!p)
+ 		return NULL;
+ 
+@@ -366,7 +367,7 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+ 
+ static void vc_uniscr_set(struct vc_data *vc, struct uni_screen *new_uniscr)
+ {
+-	kfree(vc->vc_uni_screen);
++	vfree(vc->vc_uni_screen);
+ 	vc->vc_uni_screen = new_uniscr;
+ }
+ 
+@@ -1206,7 +1207,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ 	if (new_cols == vc->vc_cols && new_rows == vc->vc_rows)
+ 		return 0;
+ 
+-	if (new_screen_size > (4 << 20))
++	if (new_screen_size > KMALLOC_MAX_SIZE)
+ 		return -EINVAL;
+ 	newscreen = kzalloc(new_screen_size, GFP_USER);
+ 	if (!newscreen)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 84d6f7df09a4..8ca72d80501d 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -412,9 +412,12 @@ static void acm_ctrl_irq(struct urb *urb)
+ 
+ exit:
+ 	retval = usb_submit_urb(urb, GFP_ATOMIC);
+-	if (retval && retval != -EPERM)
++	if (retval && retval != -EPERM && retval != -ENODEV)
+ 		dev_err(&acm->control->dev,
+ 			"%s - usb_submit_urb failed: %d\n", __func__, retval);
++	else
++		dev_vdbg(&acm->control->dev,
++			"control resubmission terminated %d\n", retval);
+ }
+ 
+ static int acm_submit_read_urb(struct acm *acm, int index, gfp_t mem_flags)
+@@ -430,6 +433,8 @@ static int acm_submit_read_urb(struct acm *acm, int index, gfp_t mem_flags)
+ 			dev_err(&acm->data->dev,
+ 				"urb %d failed submission with %d\n",
+ 				index, res);
++		} else {
++			dev_vdbg(&acm->data->dev, "intended failure %d\n", res);
+ 		}
+ 		set_bit(index, &acm->read_urbs_free);
+ 		return res;
+@@ -471,6 +476,7 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	int status = urb->status;
+ 	bool stopped = false;
+ 	bool stalled = false;
++	bool cooldown = false;
+ 
+ 	dev_vdbg(&acm->data->dev, "got urb %d, len %d, status %d\n",
+ 		rb->index, urb->actual_length, status);
+@@ -497,6 +503,14 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 			__func__, status);
+ 		stopped = true;
+ 		break;
++	case -EOVERFLOW:
++	case -EPROTO:
++		dev_dbg(&acm->data->dev,
++			"%s - cooling babbling device\n", __func__);
++		usb_mark_last_busy(acm->dev);
++		set_bit(rb->index, &acm->urbs_in_error_delay);
++		cooldown = true;
++		break;
+ 	default:
+ 		dev_dbg(&acm->data->dev,
+ 			"%s - nonzero urb status received: %d\n",
+@@ -518,9 +532,11 @@ static void acm_read_bulk_callback(struct urb *urb)
+ 	 */
+ 	smp_mb__after_atomic();
+ 
+-	if (stopped || stalled) {
++	if (stopped || stalled || cooldown) {
+ 		if (stalled)
+ 			schedule_work(&acm->work);
++		else if (cooldown)
++			schedule_delayed_work(&acm->dwork, HZ / 2);
+ 		return;
+ 	}
+ 
+@@ -557,14 +573,20 @@ static void acm_softint(struct work_struct *work)
+ 	struct acm *acm = container_of(work, struct acm, work);
+ 
+ 	if (test_bit(EVENT_RX_STALL, &acm->flags)) {
+-		if (!(usb_autopm_get_interface(acm->data))) {
++		smp_mb(); /* against acm_suspend() */
++		if (!acm->susp_count) {
+ 			for (i = 0; i < acm->rx_buflimit; i++)
+ 				usb_kill_urb(acm->read_urbs[i]);
+ 			usb_clear_halt(acm->dev, acm->in);
+ 			acm_submit_read_urbs(acm, GFP_KERNEL);
+-			usb_autopm_put_interface(acm->data);
++			clear_bit(EVENT_RX_STALL, &acm->flags);
+ 		}
+-		clear_bit(EVENT_RX_STALL, &acm->flags);
++	}
++
++	if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
++		for (i = 0; i < ACM_NR; i++)
++			if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
++					acm_submit_read_urb(acm, i, GFP_NOIO);
+ 	}
+ 
+ 	if (test_and_clear_bit(EVENT_TTY_WAKEUP, &acm->flags))
+@@ -1333,6 +1355,7 @@ made_compressed_probe:
+ 	acm->readsize = readsize;
+ 	acm->rx_buflimit = num_rx_buf;
+ 	INIT_WORK(&acm->work, acm_softint);
++	INIT_DELAYED_WORK(&acm->dwork, acm_softint);
+ 	init_waitqueue_head(&acm->wioctl);
+ 	spin_lock_init(&acm->write_lock);
+ 	spin_lock_init(&acm->read_lock);
+@@ -1542,6 +1565,7 @@ static void acm_disconnect(struct usb_interface *intf)
+ 
+ 	acm_kill_urbs(acm);
+ 	cancel_work_sync(&acm->work);
++	cancel_delayed_work_sync(&acm->dwork);
+ 
+ 	tty_unregister_device(acm_tty_driver, acm->minor);
+ 
+@@ -1584,6 +1608,8 @@ static int acm_suspend(struct usb_interface *intf, pm_message_t message)
+ 
+ 	acm_kill_urbs(acm);
+ 	cancel_work_sync(&acm->work);
++	cancel_delayed_work_sync(&acm->dwork);
++	acm->urbs_in_error_delay = 0;
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/class/cdc-acm.h b/drivers/usb/class/cdc-acm.h
+index ca1c026382c2..cd5e9d8ab237 100644
+--- a/drivers/usb/class/cdc-acm.h
++++ b/drivers/usb/class/cdc-acm.h
+@@ -109,8 +109,11 @@ struct acm {
+ #		define EVENT_TTY_WAKEUP	0
+ #		define EVENT_RX_STALL	1
+ #		define ACM_THROTTLED	2
++#		define ACM_ERROR_DELAY	3
++	unsigned long urbs_in_error_delay;		/* these need to be restarted after a delay */
+ 	struct usb_cdc_line_coding line;		/* bits, stop, parity */
+-	struct work_struct work;			/* work queue entry for line discipline waking up */
++	struct work_struct work;			/* work queue entry for various purposes*/
++	struct delayed_work dwork;			/* for cool downs needed in error recovery */
+ 	unsigned int ctrlin;				/* input control lines (DCD, DSR, RI, break, overruns) */
+ 	unsigned int ctrlout;				/* output control lines (DTR, RTS) */
+ 	struct async_icount iocount;			/* counters for control line changes */
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 54cd8ef795ec..2b6565c06c23 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -1223,6 +1223,11 @@ static void hub_activate(struct usb_hub *hub, enum hub_activation_type type)
+ #ifdef CONFIG_PM
+ 			udev->reset_resume = 1;
+ #endif
++			/* Don't set the change_bits when the device
++			 * was powered off.
++			 */
++			if (test_bit(port1, hub->power_bits))
++				set_bit(port1, hub->change_bits);
+ 
+ 		} else {
+ 			/* The power session is gone; tell hub_wq */
+@@ -2723,13 +2728,11 @@ static bool use_new_scheme(struct usb_device *udev, int retry,
+ {
+ 	int old_scheme_first_port =
+ 		port_dev->quirks & USB_PORT_QUIRK_OLD_SCHEME;
+-	int quick_enumeration = (udev->speed == USB_SPEED_HIGH);
+ 
+ 	if (udev->speed >= USB_SPEED_SUPER)
+ 		return false;
+ 
+-	return USE_NEW_SCHEME(retry, old_scheme_first_port || old_scheme_first
+-			      || quick_enumeration);
++	return USE_NEW_SCHEME(retry, old_scheme_first_port || old_scheme_first);
+ }
+ 
+ /* Is a USB 3.0 port in the Inactive or Compliance Mode state?
+@@ -3088,6 +3091,15 @@ static int check_port_resume_type(struct usb_device *udev,
+ 		if (portchange & USB_PORT_STAT_C_ENABLE)
+ 			usb_clear_port_feature(hub->hdev, port1,
+ 					USB_PORT_FEAT_C_ENABLE);
++
++		/*
++		 * Whatever made this reset-resume necessary may have
++		 * turned on the port1 bit in hub->change_bits.  But after
++		 * a successful reset-resume we want the bit to be clear;
++		 * if it was on it would indicate that something happened
++		 * following the reset-resume.
++		 */
++		clear_bit(port1, hub->change_bits);
+ 	}
+ 
+ 	return status;
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 5adf489428aa..02eaac7e1e34 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -588,12 +588,13 @@ void usb_sg_cancel(struct usb_sg_request *io)
+ 	int i, retval;
+ 
+ 	spin_lock_irqsave(&io->lock, flags);
+-	if (io->status) {
++	if (io->status || io->count == 0) {
+ 		spin_unlock_irqrestore(&io->lock, flags);
+ 		return;
+ 	}
+ 	/* shut everything down */
+ 	io->status = -ECONNRESET;
++	io->count++;		/* Keep the request alive until we're done */
+ 	spin_unlock_irqrestore(&io->lock, flags);
+ 
+ 	for (i = io->entries - 1; i >= 0; --i) {
+@@ -607,6 +608,12 @@ void usb_sg_cancel(struct usb_sg_request *io)
+ 			dev_warn(&io->dev->dev, "%s, unlink --> %d\n",
+ 				 __func__, retval);
+ 	}
++
++	spin_lock_irqsave(&io->lock, flags);
++	io->count--;
++	if (!io->count)
++		complete(&io->complete);
++	spin_unlock_irqrestore(&io->lock, flags);
+ }
+ EXPORT_SYMBOL_GPL(usb_sg_cancel);
+ 
+diff --git a/drivers/usb/core/quirks.c b/drivers/usb/core/quirks.c
+index da30b5664ff3..3e8efe759c3e 100644
+--- a/drivers/usb/core/quirks.c
++++ b/drivers/usb/core/quirks.c
+@@ -430,6 +430,10 @@ static const struct usb_device_id usb_quirk_list[] = {
+ 	/* Corsair K70 LUX */
+ 	{ USB_DEVICE(0x1b1c, 0x1b36), .driver_info = USB_QUIRK_DELAY_INIT },
+ 
++	/* Corsair K70 RGB RAPDIFIRE */
++	{ USB_DEVICE(0x1b1c, 0x1b38), .driver_info = USB_QUIRK_DELAY_INIT |
++	  USB_QUIRK_DELAY_CTRL_MSG },
++
+ 	/* MIDI keyboard WORLDE MINI */
+ 	{ USB_DEVICE(0x1c75, 0x0204), .driver_info =
+ 			USB_QUIRK_CONFIG_INTF_STRINGS },
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 4d3c79d90a6e..9460d42f8675 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2484,14 +2484,7 @@ static int dwc3_gadget_ep_reclaim_trb_linear(struct dwc3_ep *dep,
+ 
+ static bool dwc3_gadget_ep_request_completed(struct dwc3_request *req)
+ {
+-	/*
+-	 * For OUT direction, host may send less than the setup
+-	 * length. Return true for all OUT requests.
+-	 */
+-	if (!req->direction)
+-		return true;
+-
+-	return req->request.actual == req->request.length;
++	return req->num_pending_sgs == 0;
+ }
+ 
+ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+@@ -2515,8 +2508,7 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
+ 
+ 	req->request.actual = req->request.length - req->remaining;
+ 
+-	if (!dwc3_gadget_ep_request_completed(req) ||
+-			req->num_pending_sgs) {
++	if (!dwc3_gadget_ep_request_completed(req)) {
+ 		__dwc3_gadget_kick_transfer(dep);
+ 		goto out;
+ 	}
+diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
+index 971c6b92484a..171280c80228 100644
+--- a/drivers/usb/early/xhci-dbc.c
++++ b/drivers/usb/early/xhci-dbc.c
+@@ -728,19 +728,19 @@ static void xdbc_handle_tx_event(struct xdbc_trb *evt_trb)
+ 	case COMP_USB_TRANSACTION_ERROR:
+ 	case COMP_STALL_ERROR:
+ 	default:
+-		if (ep_id == XDBC_EPID_OUT)
++		if (ep_id == XDBC_EPID_OUT || ep_id == XDBC_EPID_OUT_INTEL)
+ 			xdbc.flags |= XDBC_FLAGS_OUT_STALL;
+-		if (ep_id == XDBC_EPID_IN)
++		if (ep_id == XDBC_EPID_IN || ep_id == XDBC_EPID_IN_INTEL)
+ 			xdbc.flags |= XDBC_FLAGS_IN_STALL;
+ 
+ 		xdbc_trace("endpoint %d stalled\n", ep_id);
+ 		break;
+ 	}
+ 
+-	if (ep_id == XDBC_EPID_IN) {
++	if (ep_id == XDBC_EPID_IN || ep_id == XDBC_EPID_IN_INTEL) {
+ 		xdbc.flags &= ~XDBC_FLAGS_IN_PROCESS;
+ 		xdbc_bulk_transfer(NULL, XDBC_MAX_PACKET, true);
+-	} else if (ep_id == XDBC_EPID_OUT) {
++	} else if (ep_id == XDBC_EPID_OUT || ep_id == XDBC_EPID_OUT_INTEL) {
+ 		xdbc.flags &= ~XDBC_FLAGS_OUT_PROCESS;
+ 	} else {
+ 		xdbc_trace("invalid endpoint id %d\n", ep_id);
+diff --git a/drivers/usb/early/xhci-dbc.h b/drivers/usb/early/xhci-dbc.h
+index 673686eeddd7..6e2b7266a695 100644
+--- a/drivers/usb/early/xhci-dbc.h
++++ b/drivers/usb/early/xhci-dbc.h
+@@ -120,8 +120,22 @@ struct xdbc_ring {
+ 	u32			cycle_state;
+ };
+ 
+-#define XDBC_EPID_OUT		2
+-#define XDBC_EPID_IN		3
++/*
++ * These are the "Endpoint ID" (also known as "Context Index") values for the
++ * OUT Transfer Ring and the IN Transfer Ring of a Debug Capability Context data
++ * structure.
++ * According to the "eXtensible Host Controller Interface for Universal Serial
++ * Bus (xHCI)" specification, section "7.6.3.2 Endpoint Contexts and Transfer
++ * Rings", these should be 0 and 1, and those are the values AMD machines give
++ * you; but Intel machines seem to use the formula from section "4.5.1 Device
++ * Context Index", which is supposed to be used for the Device Context only.
++ * Luckily the values from Intel don't overlap with those from AMD, so we can
++ * just test for both.
++ */
++#define XDBC_EPID_OUT		0
++#define XDBC_EPID_IN		1
++#define XDBC_EPID_OUT_INTEL	2
++#define XDBC_EPID_IN_INTEL	3
+ 
+ struct xdbc_state {
+ 	u16			vendor;
+diff --git a/drivers/usb/gadget/function/f_fs.c b/drivers/usb/gadget/function/f_fs.c
+index 767f30b86645..edfb70874c46 100644
+--- a/drivers/usb/gadget/function/f_fs.c
++++ b/drivers/usb/gadget/function/f_fs.c
+@@ -1813,6 +1813,10 @@ static void ffs_data_reset(struct ffs_data *ffs)
+ 	ffs->state = FFS_READ_DESCRIPTORS;
+ 	ffs->setup_state = FFS_NO_SETUP;
+ 	ffs->flags = 0;
++
++	ffs->ms_os_descs_ext_prop_count = 0;
++	ffs->ms_os_descs_ext_prop_name_len = 0;
++	ffs->ms_os_descs_ext_prop_data_len = 0;
+ }
+ 
+ 
+diff --git a/drivers/usb/host/xhci-hub.c b/drivers/usb/host/xhci-hub.c
+index af92b2576fe9..3196de2931b1 100644
+--- a/drivers/usb/host/xhci-hub.c
++++ b/drivers/usb/host/xhci-hub.c
+@@ -1306,7 +1306,47 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 					 wIndex, link_state);
+ 				goto error;
+ 			}
++
++			/*
++			 * set link to U0, steps depend on current link state.
++			 * U3: set link to U0 and wait for u3exit completion.
++			 * U1/U2:  no PLC complete event, only set link to U0.
++			 * Resume/Recovery: device initiated U0, only wait for
++			 * completion
++			 */
++			if (link_state == USB_SS_PORT_LS_U0) {
++				u32 pls = temp & PORT_PLS_MASK;
++				bool wait_u0 = false;
++
++				/* already in U0 */
++				if (pls == XDEV_U0)
++					break;
++				if (pls == XDEV_U3 ||
++				    pls == XDEV_RESUME ||
++				    pls == XDEV_RECOVERY) {
++					wait_u0 = true;
++					reinit_completion(&bus_state->u3exit_done[wIndex]);
++				}
++				if (pls <= XDEV_U3) /* U1, U2, U3 */
++					xhci_set_link_state(xhci, ports[wIndex],
++							    USB_SS_PORT_LS_U0);
++				if (!wait_u0) {
++					if (pls > XDEV_U3)
++						goto error;
++					break;
++				}
++				spin_unlock_irqrestore(&xhci->lock, flags);
++				if (!wait_for_completion_timeout(&bus_state->u3exit_done[wIndex],
++								 msecs_to_jiffies(100)))
++					xhci_dbg(xhci, "missing U0 port change event for port %d\n",
++						 wIndex);
++				spin_lock_irqsave(&xhci->lock, flags);
++				temp = readl(ports[wIndex]->addr);
++				break;
++			}
++
+ 			if (link_state == USB_SS_PORT_LS_U3) {
++				int retries = 16;
+ 				slot_id = xhci_find_slot_id_by_port(hcd, xhci,
+ 						wIndex + 1);
+ 				if (slot_id) {
+@@ -1317,17 +1357,18 @@ int xhci_hub_control(struct usb_hcd *hcd, u16 typeReq, u16 wValue,
+ 					xhci_stop_device(xhci, slot_id, 1);
+ 					spin_lock_irqsave(&xhci->lock, flags);
+ 				}
+-			}
+-
+-			xhci_set_link_state(xhci, ports[wIndex], link_state);
+-
+-			spin_unlock_irqrestore(&xhci->lock, flags);
+-			msleep(20); /* wait device to enter */
+-			spin_lock_irqsave(&xhci->lock, flags);
+-
+-			temp = readl(ports[wIndex]->addr);
+-			if (link_state == USB_SS_PORT_LS_U3)
++				xhci_set_link_state(xhci, ports[wIndex], USB_SS_PORT_LS_U3);
++				spin_unlock_irqrestore(&xhci->lock, flags);
++				while (retries--) {
++					usleep_range(4000, 8000);
++					temp = readl(ports[wIndex]->addr);
++					if ((temp & PORT_PLS_MASK) == XDEV_U3)
++						break;
++				}
++				spin_lock_irqsave(&xhci->lock, flags);
++				temp = readl(ports[wIndex]->addr);
+ 				bus_state->suspended_ports |= 1 << wIndex;
++			}
+ 			break;
+ 		case USB_PORT_FEAT_POWER:
+ 			/*
+@@ -1528,6 +1569,8 @@ int xhci_hub_status_data(struct usb_hcd *hcd, char *buf)
+ 		}
+ 		if ((temp & PORT_RC))
+ 			reset_change = true;
++		if (temp & PORT_OC)
++			status = 1;
+ 	}
+ 	if (!status && !reset_change) {
+ 		xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);
+@@ -1593,6 +1636,13 @@ retry:
+ 				 port_index);
+ 			goto retry;
+ 		}
++		/* bail out if port detected a over-current condition */
++		if (t1 & PORT_OC) {
++			bus_state->bus_suspended = 0;
++			spin_unlock_irqrestore(&xhci->lock, flags);
++			xhci_dbg(xhci, "Bus suspend bailout, port over-current detected\n");
++			return -EBUSY;
++		}
+ 		/* suspend ports in U0, or bail out for new connect changes */
+ 		if ((t1 & PORT_PE) && (t1 & PORT_PLS_MASK) == XDEV_U0) {
+ 			if ((t1 & PORT_CSC) && wake_enabled) {
+diff --git a/drivers/usb/host/xhci-mem.c b/drivers/usb/host/xhci-mem.c
+index 884c601bfa15..9764122c9cdf 100644
+--- a/drivers/usb/host/xhci-mem.c
++++ b/drivers/usb/host/xhci-mem.c
+@@ -2552,6 +2552,7 @@ int xhci_mem_init(struct xhci_hcd *xhci, gfp_t flags)
+ 		xhci->usb3_rhub.bus_state.resume_done[i] = 0;
+ 		/* Only the USB 2.0 completions will ever be used. */
+ 		init_completion(&xhci->usb2_rhub.bus_state.rexit_done[i]);
++		init_completion(&xhci->usb3_rhub.bus_state.u3exit_done[i]);
+ 	}
+ 
+ 	if (scratchpad_alloc(xhci, flags))
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index d23f7408c81f..2fbc00c0a6e8 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -547,6 +547,23 @@ void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+ 				stream_id);
+ 		return;
+ 	}
++	/*
++	 * A cancelled TD can complete with a stall if HW cached the trb.
++	 * In this case driver can't find cur_td, but if the ring is empty we
++	 * can move the dequeue pointer to the current enqueue position.
++	 */
++	if (!cur_td) {
++		if (list_empty(&ep_ring->td_list)) {
++			state->new_deq_seg = ep_ring->enq_seg;
++			state->new_deq_ptr = ep_ring->enqueue;
++			state->new_cycle_state = ep_ring->cycle_state;
++			goto done;
++		} else {
++			xhci_warn(xhci, "Can't find new dequeue state, missing cur_td\n");
++			return;
++		}
++	}
++
+ 	/* Dig out the cycle state saved by the xHC during the stop ep cmd */
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
+ 			"Finding endpoint context");
+@@ -592,6 +609,7 @@ void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+ 	state->new_deq_seg = new_seg;
+ 	state->new_deq_ptr = new_deq;
+ 
++done:
+ 	/* Don't update the ring cycle state for the producer (us). */
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
+ 			"Cycle state = 0x%x", state->new_cycle_state);
+@@ -1677,6 +1695,7 @@ static void handle_port_status(struct xhci_hcd *xhci,
+ 	     (portsc & PORT_PLS_MASK) == XDEV_U1 ||
+ 	     (portsc & PORT_PLS_MASK) == XDEV_U2)) {
+ 		xhci_dbg(xhci, "resume SS port %d finished\n", port_id);
++		complete(&bus_state->u3exit_done[hcd_portnum]);
+ 		/* We've just brought the device into U0/1/2 through either the
+ 		 * Resume state after a device remote wakeup, or through the
+ 		 * U3Exit state after a host-initiated resume.  If it's a device
+@@ -1851,8 +1870,8 @@ static void xhci_cleanup_halted_endpoint(struct xhci_hcd *xhci,
+ 
+ 	if (reset_type == EP_HARD_RESET) {
+ 		ep->ep_state |= EP_HARD_CLEAR_TOGGLE;
+-		xhci_cleanup_stalled_ring(xhci, ep_index, stream_id, td);
+-		xhci_clear_hub_tt_buffer(xhci, td, ep);
++		xhci_cleanup_stalled_ring(xhci, slot_id, ep_index, stream_id,
++					  td);
+ 	}
+ 	xhci_ring_cmd_db(xhci);
+ }
+@@ -1973,11 +1992,18 @@ static int finish_td(struct xhci_hcd *xhci, struct xhci_td *td,
+ 	if (trb_comp_code == COMP_STALL_ERROR ||
+ 		xhci_requires_manual_halt_cleanup(xhci, ep_ctx,
+ 						trb_comp_code)) {
+-		/* Issue a reset endpoint command to clear the host side
+-		 * halt, followed by a set dequeue command to move the
+-		 * dequeue pointer past the TD.
+-		 * The class driver clears the device side halt later.
++		/*
++		 * xhci internal endpoint state will go to a "halt" state for
++		 * any stall, including default control pipe protocol stall.
++		 * To clear the host side halt we need to issue a reset endpoint
++		 * command, followed by a set dequeue command to move past the
++		 * TD.
++		 * Class drivers clear the device side halt from a functional
++		 * stall later. Hub TT buffer should only be cleared for FS/LS
++		 * devices behind HS hubs for functional stalls.
+ 		 */
++		if ((ep_index != 0) || (trb_comp_code != COMP_STALL_ERROR))
++			xhci_clear_hub_tt_buffer(xhci, td, ep);
+ 		xhci_cleanup_halted_endpoint(xhci, slot_id, ep_index,
+ 					ep_ring->stream_id, td, EP_HARD_RESET);
+ 	} else {
+@@ -2530,6 +2556,15 @@ static int handle_tx_event(struct xhci_hcd *xhci,
+ 				xhci_dbg(xhci, "td_list is empty while skip flag set. Clear skip flag for slot %u ep %u.\n",
+ 					 slot_id, ep_index);
+ 			}
++			if (trb_comp_code == COMP_STALL_ERROR ||
++			    xhci_requires_manual_halt_cleanup(xhci, ep_ctx,
++							      trb_comp_code)) {
++				xhci_cleanup_halted_endpoint(xhci, slot_id,
++							     ep_index,
++							     ep_ring->stream_id,
++							     NULL,
++							     EP_HARD_RESET);
++			}
+ 			goto cleanup;
+ 		}
+ 
+diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
+index fe38275363e0..bee5deccc83d 100644
+--- a/drivers/usb/host/xhci.c
++++ b/drivers/usb/host/xhci.c
+@@ -3031,19 +3031,19 @@ static void xhci_setup_input_ctx_for_quirk(struct xhci_hcd *xhci,
+ 			added_ctxs, added_ctxs);
+ }
+ 
+-void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,
+-			       unsigned int stream_id, struct xhci_td *td)
++void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int slot_id,
++			       unsigned int ep_index, unsigned int stream_id,
++			       struct xhci_td *td)
+ {
+ 	struct xhci_dequeue_state deq_state;
+-	struct usb_device *udev = td->urb->dev;
+ 
+ 	xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep,
+ 			"Cleaning up stalled endpoint ring");
+ 	/* We need to move the HW's dequeue pointer past this TD,
+ 	 * or it will attempt to resend it on the next doorbell ring.
+ 	 */
+-	xhci_find_new_dequeue_state(xhci, udev->slot_id,
+-			ep_index, stream_id, td, &deq_state);
++	xhci_find_new_dequeue_state(xhci, slot_id, ep_index, stream_id, td,
++				    &deq_state);
+ 
+ 	if (!deq_state.new_deq_ptr || !deq_state.new_deq_seg)
+ 		return;
+@@ -3054,7 +3054,7 @@ void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,
+ 	if (!(xhci->quirks & XHCI_RESET_EP_QUIRK)) {
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_reset_ep,
+ 				"Queueing new dequeue state");
+-		xhci_queue_new_dequeue_state(xhci, udev->slot_id,
++		xhci_queue_new_dequeue_state(xhci, slot_id,
+ 				ep_index, &deq_state);
+ 	} else {
+ 		/* Better hope no one uses the input context between now and the
+@@ -3065,7 +3065,7 @@ void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,
+ 		xhci_dbg_trace(xhci, trace_xhci_dbg_quirks,
+ 				"Setting up input context for "
+ 				"configure endpoint command");
+-		xhci_setup_input_ctx_for_quirk(xhci, udev->slot_id,
++		xhci_setup_input_ctx_for_quirk(xhci, slot_id,
+ 				ep_index, &deq_state);
+ 	}
+ }
+diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
+index 3ecee10fdcdc..02f972e464ab 100644
+--- a/drivers/usb/host/xhci.h
++++ b/drivers/usb/host/xhci.h
+@@ -1694,6 +1694,7 @@ struct xhci_bus_state {
+ 	/* Which ports are waiting on RExit to U0 transition. */
+ 	unsigned long		rexit_ports;
+ 	struct completion	rexit_done[USB_MAXCHILDREN];
++	struct completion	u3exit_done[USB_MAXCHILDREN];
+ };
+ 
+ 
+@@ -2115,8 +2116,9 @@ void xhci_find_new_dequeue_state(struct xhci_hcd *xhci,
+ void xhci_queue_new_dequeue_state(struct xhci_hcd *xhci,
+ 		unsigned int slot_id, unsigned int ep_index,
+ 		struct xhci_dequeue_state *deq_state);
+-void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int ep_index,
+-		unsigned int stream_id, struct xhci_td *td);
++void xhci_cleanup_stalled_ring(struct xhci_hcd *xhci, unsigned int slot_id,
++			       unsigned int ep_index, unsigned int stream_id,
++			       struct xhci_td *td);
+ void xhci_stop_endpoint_command_watchdog(struct timer_list *t);
+ void xhci_handle_command_timeout(struct work_struct *work);
+ 
+diff --git a/drivers/usb/misc/sisusbvga/sisusb.c b/drivers/usb/misc/sisusbvga/sisusb.c
+index 2ab9600d0898..fc8a5da4a07c 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb.c
++++ b/drivers/usb/misc/sisusbvga/sisusb.c
+@@ -1199,18 +1199,18 @@ static int sisusb_read_mem_bulk(struct sisusb_usb_data *sisusb, u32 addr,
+ /* High level: Gfx (indexed) register access */
+ 
+ #ifdef CONFIG_USB_SISUSBVGA_CON
+-int sisusb_setreg(struct sisusb_usb_data *sisusb, int port, u8 data)
++int sisusb_setreg(struct sisusb_usb_data *sisusb, u32 port, u8 data)
+ {
+ 	return sisusb_write_memio_byte(sisusb, SISUSB_TYPE_IO, port, data);
+ }
+ 
+-int sisusb_getreg(struct sisusb_usb_data *sisusb, int port, u8 *data)
++int sisusb_getreg(struct sisusb_usb_data *sisusb, u32 port, u8 *data)
+ {
+ 	return sisusb_read_memio_byte(sisusb, SISUSB_TYPE_IO, port, data);
+ }
+ #endif
+ 
+-int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,
++int sisusb_setidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ 		u8 index, u8 data)
+ {
+ 	int ret;
+@@ -1220,7 +1220,7 @@ int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,
+ 	return ret;
+ }
+ 
+-int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,
++int sisusb_getidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ 		u8 index, u8 *data)
+ {
+ 	int ret;
+@@ -1230,7 +1230,7 @@ int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,
+ 	return ret;
+ }
+ 
+-int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port, u8 idx,
++int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, u32 port, u8 idx,
+ 		u8 myand, u8 myor)
+ {
+ 	int ret;
+@@ -1245,7 +1245,7 @@ int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port, u8 idx,
+ }
+ 
+ static int sisusb_setidxregmask(struct sisusb_usb_data *sisusb,
+-		int port, u8 idx, u8 data, u8 mask)
++		u32 port, u8 idx, u8 data, u8 mask)
+ {
+ 	int ret;
+ 	u8 tmp;
+@@ -1258,13 +1258,13 @@ static int sisusb_setidxregmask(struct sisusb_usb_data *sisusb,
+ 	return ret;
+ }
+ 
+-int sisusb_setidxregor(struct sisusb_usb_data *sisusb, int port,
++int sisusb_setidxregor(struct sisusb_usb_data *sisusb, u32 port,
+ 		u8 index, u8 myor)
+ {
+ 	return sisusb_setidxregandor(sisusb, port, index, 0xff, myor);
+ }
+ 
+-int sisusb_setidxregand(struct sisusb_usb_data *sisusb, int port,
++int sisusb_setidxregand(struct sisusb_usb_data *sisusb, u32 port,
+ 		u8 idx, u8 myand)
+ {
+ 	return sisusb_setidxregandor(sisusb, port, idx, myand, 0x00);
+@@ -2785,8 +2785,8 @@ static loff_t sisusb_lseek(struct file *file, loff_t offset, int orig)
+ static int sisusb_handle_command(struct sisusb_usb_data *sisusb,
+ 		struct sisusb_command *y, unsigned long arg)
+ {
+-	int	retval, port, length;
+-	u32	address;
++	int	retval, length;
++	u32	port, address;
+ 
+ 	/* All our commands require the device
+ 	 * to be initialized.
+diff --git a/drivers/usb/misc/sisusbvga/sisusb_init.h b/drivers/usb/misc/sisusbvga/sisusb_init.h
+index 1782c759c4ad..ace09985dae4 100644
+--- a/drivers/usb/misc/sisusbvga/sisusb_init.h
++++ b/drivers/usb/misc/sisusbvga/sisusb_init.h
+@@ -812,17 +812,17 @@ static const struct SiS_VCLKData SiSUSB_VCLKData[] = {
+ int SiSUSBSetMode(struct SiS_Private *SiS_Pr, unsigned short ModeNo);
+ int SiSUSBSetVESAMode(struct SiS_Private *SiS_Pr, unsigned short VModeNo);
+ 
+-extern int sisusb_setreg(struct sisusb_usb_data *sisusb, int port, u8 data);
+-extern int sisusb_getreg(struct sisusb_usb_data *sisusb, int port, u8 * data);
+-extern int sisusb_setidxreg(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setreg(struct sisusb_usb_data *sisusb, u32 port, u8 data);
++extern int sisusb_getreg(struct sisusb_usb_data *sisusb, u32 port, u8 * data);
++extern int sisusb_setidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ 			    u8 index, u8 data);
+-extern int sisusb_getidxreg(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_getidxreg(struct sisusb_usb_data *sisusb, u32 port,
+ 			    u8 index, u8 * data);
+-extern int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setidxregandor(struct sisusb_usb_data *sisusb, u32 port,
+ 				 u8 idx, u8 myand, u8 myor);
+-extern int sisusb_setidxregor(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setidxregor(struct sisusb_usb_data *sisusb, u32 port,
+ 			      u8 index, u8 myor);
+-extern int sisusb_setidxregand(struct sisusb_usb_data *sisusb, int port,
++extern int sisusb_setidxregand(struct sisusb_usb_data *sisusb, u32 port,
+ 			       u8 idx, u8 myand);
+ 
+ void sisusb_delete(struct kref *kref);
+diff --git a/drivers/usb/storage/uas.c b/drivers/usb/storage/uas.c
+index 3670fda02c34..d592071119ba 100644
+--- a/drivers/usb/storage/uas.c
++++ b/drivers/usb/storage/uas.c
+@@ -81,6 +81,19 @@ static void uas_free_streams(struct uas_dev_info *devinfo);
+ static void uas_log_cmd_state(struct scsi_cmnd *cmnd, const char *prefix,
+ 				int status);
+ 
++/*
++ * This driver needs its own workqueue, as we need to control memory allocation.
++ *
++ * In the course of error handling and power management uas_wait_for_pending_cmnds()
++ * needs to flush pending work items. In these contexts we cannot allocate memory
++ * by doing block IO as we would deadlock. For the same reason we cannot wait
++ * for anything allocating memory not heeding these constraints.
++ *
++ * So we have to control all work items that can be on the workqueue we flush.
++ * Hence we cannot share a queue and need our own.
++ */
++static struct workqueue_struct *workqueue;
++
+ static void uas_do_work(struct work_struct *work)
+ {
+ 	struct uas_dev_info *devinfo =
+@@ -109,7 +122,7 @@ static void uas_do_work(struct work_struct *work)
+ 		if (!err)
+ 			cmdinfo->state &= ~IS_IN_WORK_LIST;
+ 		else
+-			schedule_work(&devinfo->work);
++			queue_work(workqueue, &devinfo->work);
+ 	}
+ out:
+ 	spin_unlock_irqrestore(&devinfo->lock, flags);
+@@ -134,7 +147,7 @@ static void uas_add_work(struct uas_cmd_info *cmdinfo)
+ 
+ 	lockdep_assert_held(&devinfo->lock);
+ 	cmdinfo->state |= IS_IN_WORK_LIST;
+-	schedule_work(&devinfo->work);
++	queue_work(workqueue, &devinfo->work);
+ }
+ 
+ static void uas_zap_pending(struct uas_dev_info *devinfo, int result)
+@@ -190,6 +203,9 @@ static void uas_log_cmd_state(struct scsi_cmnd *cmnd, const char *prefix,
+ 	struct uas_cmd_info *ci = (void *)&cmnd->SCp;
+ 	struct uas_cmd_info *cmdinfo = (void *)&cmnd->SCp;
+ 
++	if (status == -ENODEV) /* too late */
++		return;
++
+ 	scmd_printk(KERN_INFO, cmnd,
+ 		    "%s %d uas-tag %d inflight:%s%s%s%s%s%s%s%s%s%s%s%s ",
+ 		    prefix, status, cmdinfo->uas_tag,
+@@ -1226,7 +1242,31 @@ static struct usb_driver uas_driver = {
+ 	.id_table = uas_usb_ids,
+ };
+ 
+-module_usb_driver(uas_driver);
++static int __init uas_init(void)
++{
++	int rv;
++
++	workqueue = alloc_workqueue("uas", WQ_MEM_RECLAIM, 0);
++	if (!workqueue)
++		return -ENOMEM;
++
++	rv = usb_register(&uas_driver);
++	if (rv) {
++		destroy_workqueue(workqueue);
++		return -ENOMEM;
++	}
++
++	return 0;
++}
++
++static void __exit uas_exit(void)
++{
++	usb_deregister(&uas_driver);
++	destroy_workqueue(workqueue);
++}
++
++module_init(uas_init);
++module_exit(uas_exit);
+ 
+ MODULE_LICENSE("GPL");
+ MODULE_IMPORT_NS(USB_STORAGE);
+diff --git a/drivers/usb/storage/unusual_devs.h b/drivers/usb/storage/unusual_devs.h
+index 1880f3e13f57..f6c3681fa2e9 100644
+--- a/drivers/usb/storage/unusual_devs.h
++++ b/drivers/usb/storage/unusual_devs.h
+@@ -2323,6 +2323,13 @@ UNUSUAL_DEV(  0x3340, 0xffff, 0x0000, 0x0000,
+ 		USB_SC_DEVICE,USB_PR_DEVICE,NULL,
+ 		US_FL_MAX_SECTORS_64 ),
+ 
++/* Reported by Cyril Roelandt <tipecaml@gmail.com> */
++UNUSUAL_DEV(  0x357d, 0x7788, 0x0114, 0x0114,
++		"JMicron",
++		"USB to ATA/ATAPI Bridge",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_BROKEN_FUA ),
++
+ /* Reported by Andrey Rahmatullin <wrar@altlinux.org> */
+ UNUSUAL_DEV(  0x4102, 0x1020, 0x0100,  0x0100,
+ 		"iRiver",
+diff --git a/drivers/usb/typec/bus.c b/drivers/usb/typec/bus.c
+index 2e45eb479386..f241037df7cb 100644
+--- a/drivers/usb/typec/bus.c
++++ b/drivers/usb/typec/bus.c
+@@ -208,7 +208,10 @@ EXPORT_SYMBOL_GPL(typec_altmode_vdm);
+ const struct typec_altmode *
+ typec_altmode_get_partner(struct typec_altmode *adev)
+ {
+-	return adev ? &to_altmode(adev)->partner->adev : NULL;
++	if (!adev || !to_altmode(adev)->partner)
++		return NULL;
++
++	return &to_altmode(adev)->partner->adev;
+ }
+ EXPORT_SYMBOL_GPL(typec_altmode_get_partner);
+ 
+diff --git a/drivers/usb/typec/tcpm/tcpm.c b/drivers/usb/typec/tcpm/tcpm.c
+index f3087ef8265c..c033dfb2dd8a 100644
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -3759,6 +3759,14 @@ static void _tcpm_cc_change(struct tcpm_port *port, enum typec_cc_status cc1,
+ 		 */
+ 		break;
+ 
++	case PORT_RESET:
++	case PORT_RESET_WAIT_OFF:
++		/*
++		 * State set back to default mode once the timer completes.
++		 * Ignore CC changes here.
++		 */
++		break;
++
+ 	default:
+ 		if (tcpm_port_is_disconnected(port))
+ 			tcpm_set_state(port, unattached_state(port), 0);
+@@ -3820,6 +3828,15 @@ static void _tcpm_pd_vbus_on(struct tcpm_port *port)
+ 	case SRC_TRY_DEBOUNCE:
+ 		/* Do nothing, waiting for sink detection */
+ 		break;
++
++	case PORT_RESET:
++	case PORT_RESET_WAIT_OFF:
++		/*
++		 * State set back to default mode once the timer completes.
++		 * Ignore vbus changes here.
++		 */
++		break;
++
+ 	default:
+ 		break;
+ 	}
+@@ -3873,10 +3890,19 @@ static void _tcpm_pd_vbus_off(struct tcpm_port *port)
+ 	case PORT_RESET_WAIT_OFF:
+ 		tcpm_set_state(port, tcpm_default_state(port), 0);
+ 		break;
++
+ 	case SRC_TRY_WAIT:
+ 	case SRC_TRY_DEBOUNCE:
+ 		/* Do nothing, waiting for sink detection */
+ 		break;
++
++	case PORT_RESET:
++		/*
++		 * State set back to default mode once the timer completes.
++		 * Ignore vbus changes here.
++		 */
++		break;
++
+ 	default:
+ 		if (port->pwr_role == TYPEC_SINK &&
+ 		    port->attached)
+diff --git a/drivers/watchdog/watchdog_dev.c b/drivers/watchdog/watchdog_dev.c
+index 8b5c742f24e8..7e4cd34a8c20 100644
+--- a/drivers/watchdog/watchdog_dev.c
++++ b/drivers/watchdog/watchdog_dev.c
+@@ -282,6 +282,7 @@ static int watchdog_start(struct watchdog_device *wdd)
+ 	if (err == 0) {
+ 		set_bit(WDOG_ACTIVE, &wdd->status);
+ 		wd_data->last_keepalive = started_at;
++		wd_data->last_hw_keepalive = started_at;
+ 		watchdog_update_worker(wdd);
+ 	}
+ 
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index 28ae0c134700..d050acc1fd5d 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -1973,8 +1973,12 @@ retry_locked:
+ 		}
+ 
+ 		/* want more caps from mds? */
+-		if (want & ~(cap->mds_wanted | cap->issued))
+-			goto ack;
++		if (want & ~cap->mds_wanted) {
++			if (want & ~(cap->mds_wanted | cap->issued))
++				goto ack;
++			if (!__cap_is_valid(cap))
++				goto ack;
++		}
+ 
+ 		/* things we might delay */
+ 		if ((cap->issued & ~retain) == 0)
+diff --git a/fs/ceph/export.c b/fs/ceph/export.c
+index b6bfa94332c3..79dc06881e78 100644
+--- a/fs/ceph/export.c
++++ b/fs/ceph/export.c
+@@ -315,6 +315,11 @@ static struct dentry *__get_parent(struct super_block *sb,
+ 
+ 	req->r_num_caps = 1;
+ 	err = ceph_mdsc_do_request(mdsc, NULL, req);
++	if (err) {
++		ceph_mdsc_put_request(req);
++		return ERR_PTR(err);
++	}
++
+ 	inode = req->r_target_inode;
+ 	if (inode)
+ 		ihold(inode);
+diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c
+index 788344b5949e..cd0e7f5005cb 100644
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -693,6 +693,11 @@ int open_shroot(unsigned int xid, struct cifs_tcon *tcon,
+ 	if (smb3_encryption_required(tcon))
+ 		flags |= CIFS_TRANSFORM_REQ;
+ 
++	if (!server->ops->new_lease_key)
++		return -EIO;
++
++	server->ops->new_lease_key(pfid);
++
+ 	memset(rqst, 0, sizeof(rqst));
+ 	resp_buftype[0] = resp_buftype[1] = CIFS_NO_BUFFER;
+ 	memset(rsp_iov, 0, sizeof(rsp_iov));
+diff --git a/fs/coredump.c b/fs/coredump.c
+index f8296a82d01d..408418e6aa13 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -211,6 +211,8 @@ static int format_corename(struct core_name *cn, struct coredump_params *cprm,
+ 			return -ENOMEM;
+ 		(*argv)[(*argc)++] = 0;
+ 		++pat_ptr;
++		if (!(*pat_ptr))
++			return -ENOMEM;
+ 	}
+ 
+ 	/* Repeat as long as we have more pattern to process and more output
+diff --git a/fs/nfsd/nfs4callback.c b/fs/nfsd/nfs4callback.c
+index c3b11a715082..5cf91322de0f 100644
+--- a/fs/nfsd/nfs4callback.c
++++ b/fs/nfsd/nfs4callback.c
+@@ -1312,6 +1312,7 @@ nfsd4_run_cb_work(struct work_struct *work)
+ 		container_of(work, struct nfsd4_callback, cb_work);
+ 	struct nfs4_client *clp = cb->cb_clp;
+ 	struct rpc_clnt *clnt;
++	int flags;
+ 
+ 	if (cb->cb_need_restart) {
+ 		cb->cb_need_restart = false;
+@@ -1340,7 +1341,8 @@ nfsd4_run_cb_work(struct work_struct *work)
+ 	}
+ 
+ 	cb->cb_msg.rpc_cred = clp->cl_cb_cred;
+-	rpc_call_async(clnt, &cb->cb_msg, RPC_TASK_SOFT | RPC_TASK_SOFTCONN,
++	flags = clp->cl_minorversion ? RPC_TASK_NOCONNECT : RPC_TASK_SOFTCONN;
++	rpc_call_async(clnt, &cb->cb_msg, RPC_TASK_SOFT | flags,
+ 			cb->cb_ops ? &nfsd4_cb_ops : &nfsd4_cb_probe_ops, cb);
+ }
+ 
+diff --git a/fs/proc/vmcore.c b/fs/proc/vmcore.c
+index 7dc800cce354..c663202da8de 100644
+--- a/fs/proc/vmcore.c
++++ b/fs/proc/vmcore.c
+@@ -266,7 +266,8 @@ static int vmcoredd_mmap_dumps(struct vm_area_struct *vma, unsigned long dst,
+ 		if (start < offset + dump->size) {
+ 			tsz = min(offset + (u64)dump->size - start, (u64)size);
+ 			buf = dump->buf + start - offset;
+-			if (remap_vmalloc_range_partial(vma, dst, buf, tsz)) {
++			if (remap_vmalloc_range_partial(vma, dst, buf, 0,
++							tsz)) {
+ 				ret = -EFAULT;
+ 				goto out_unlock;
+ 			}
+@@ -624,7 +625,7 @@ static int mmap_vmcore(struct file *file, struct vm_area_struct *vma)
+ 		tsz = min(elfcorebuf_sz + elfnotes_sz - (size_t)start, size);
+ 		kaddr = elfnotes_buf + start - elfcorebuf_sz - vmcoredd_orig_sz;
+ 		if (remap_vmalloc_range_partial(vma, vma->vm_start + len,
+-						kaddr, tsz))
++						kaddr, 0, tsz))
+ 			goto fail;
+ 
+ 		size -= tsz;
+diff --git a/fs/xfs/xfs_super.c b/fs/xfs/xfs_super.c
+index 2094386af8ac..68fea439d974 100644
+--- a/fs/xfs/xfs_super.c
++++ b/fs/xfs/xfs_super.c
+@@ -1861,7 +1861,8 @@ xfs_init_zones(void)
+ 
+ 	xfs_ili_zone = kmem_cache_create("xfs_ili",
+ 					 sizeof(struct xfs_inode_log_item), 0,
+-					 SLAB_MEM_SPREAD, NULL);
++					 SLAB_RECLAIM_ACCOUNT | SLAB_MEM_SPREAD,
++					 NULL);
+ 	if (!xfs_ili_zone)
+ 		goto out_destroy_inode_zone;
+ 
+diff --git a/include/linux/iio/iio.h b/include/linux/iio/iio.h
+index 862ce0019eba..d91c1e21dc70 100644
+--- a/include/linux/iio/iio.h
++++ b/include/linux/iio/iio.h
+@@ -598,7 +598,7 @@ void iio_device_unregister(struct iio_dev *indio_dev);
+  * 0 on success, negative error number on failure.
+  */
+ #define devm_iio_device_register(dev, indio_dev) \
+-	__devm_iio_device_register((dev), (indio_dev), THIS_MODULE);
++	__devm_iio_device_register((dev), (indio_dev), THIS_MODULE)
+ int __devm_iio_device_register(struct device *dev, struct iio_dev *indio_dev,
+ 			       struct module *this_mod);
+ void devm_iio_device_unregister(struct device *dev, struct iio_dev *indio_dev);
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index bcb9b2ac0791..b2a7159f66da 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1039,7 +1039,7 @@ search_memslots(struct kvm_memslots *slots, gfn_t gfn)
+ 			start = slot + 1;
+ 	}
+ 
+-	if (gfn >= memslots[start].base_gfn &&
++	if (start < slots->used_slots && gfn >= memslots[start].base_gfn &&
+ 	    gfn < memslots[start].base_gfn + memslots[start].npages) {
+ 		atomic_set(&slots->lru_slot, start);
+ 		return &memslots[start];
+diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
+index 0507a162ccd0..a95d3cc74d79 100644
+--- a/include/linux/vmalloc.h
++++ b/include/linux/vmalloc.h
+@@ -137,7 +137,7 @@ extern void vunmap(const void *addr);
+ 
+ extern int remap_vmalloc_range_partial(struct vm_area_struct *vma,
+ 				       unsigned long uaddr, void *kaddr,
+-				       unsigned long size);
++				       unsigned long pgoff, unsigned long size);
+ 
+ extern int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+ 							unsigned long pgoff);
+diff --git a/include/net/mac80211.h b/include/net/mac80211.h
+index 77e6b5a83b06..eec6d0a6ae61 100644
+--- a/include/net/mac80211.h
++++ b/include/net/mac80211.h
+@@ -5969,7 +5969,9 @@ enum rate_control_capabilities {
+ struct rate_control_ops {
+ 	unsigned long capa;
+ 	const char *name;
+-	void *(*alloc)(struct ieee80211_hw *hw, struct dentry *debugfsdir);
++	void *(*alloc)(struct ieee80211_hw *hw);
++	void (*add_debugfs)(struct ieee80211_hw *hw, void *priv,
++			    struct dentry *debugfsdir);
+ 	void (*free)(void *priv);
+ 
+ 	void *(*alloc_sta)(void *priv, struct ieee80211_sta *sta, gfp_t gfp);
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index a5ea27df3c2b..2edb73c27962 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -51,7 +51,7 @@ extern struct inet_hashinfo tcp_hashinfo;
+ extern struct percpu_counter tcp_orphan_count;
+ void tcp_time_wait(struct sock *sk, int state, int timeo);
+ 
+-#define MAX_TCP_HEADER	(128 + MAX_HEADER)
++#define MAX_TCP_HEADER	L1_CACHE_ALIGN(128 + MAX_HEADER)
+ #define MAX_TCP_OPTION_SPACE 40
+ #define TCP_MIN_SND_MSS		48
+ #define TCP_MIN_GSO_SIZE	(TCP_MIN_SND_MSS - MAX_TCP_OPTION_SPACE)
+diff --git a/ipc/util.c b/ipc/util.c
+index fe61df53775a..2d70f25f64b8 100644
+--- a/ipc/util.c
++++ b/ipc/util.c
+@@ -764,13 +764,13 @@ static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
+ 			total++;
+ 	}
+ 
++	*new_pos = pos + 1;
+ 	if (total >= ids->in_use)
+ 		return NULL;
+ 
+ 	for (; pos < ipc_mni; pos++) {
+ 		ipc = idr_find(&ids->ipcs_idr, pos);
+ 		if (ipc != NULL) {
+-			*new_pos = pos + 1;
+ 			rcu_read_lock();
+ 			ipc_lock_object(ipc);
+ 			return ipc;
+diff --git a/kernel/audit.c b/kernel/audit.c
+index 9ddfe2aa6671..7fe3b69bc02a 100644
+--- a/kernel/audit.c
++++ b/kernel/audit.c
+@@ -1326,6 +1326,9 @@ static int audit_receive_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
+ 	case AUDIT_FIRST_USER_MSG2 ... AUDIT_LAST_USER_MSG2:
+ 		if (!audit_enabled && msg_type != AUDIT_USER_AVC)
+ 			return 0;
++		/* exit early if there isn't at least one character to print */
++		if (data_len < 2)
++			return -EINVAL;
+ 
+ 		err = audit_filter(msg_type, AUDIT_FILTER_USER);
+ 		if (err == 1) { /* match or error */
+diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
+index ac7956c38f69..4b24275e306a 100644
+--- a/kernel/dma/direct.c
++++ b/kernel/dma/direct.c
+@@ -39,7 +39,8 @@ static inline struct page *dma_direct_to_page(struct device *dev,
+ 
+ u64 dma_direct_get_required_mask(struct device *dev)
+ {
+-	u64 max_dma = phys_to_dma_direct(dev, (max_pfn - 1) << PAGE_SHIFT);
++	phys_addr_t phys = (phys_addr_t)(max_pfn - 1) << PAGE_SHIFT;
++	u64 max_dma = phys_to_dma_direct(dev, phys);
+ 
+ 	return (1ULL << (fls64(max_dma) - 1)) * 2 - 1;
+ }
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 243717177f44..533c19348189 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -6734,9 +6734,12 @@ static u64 perf_virt_to_phys(u64 virt)
+ 		 * Try IRQ-safe __get_user_pages_fast first.
+ 		 * If failed, leave phys_addr as 0.
+ 		 */
+-		if ((current->mm != NULL) &&
+-		    (__get_user_pages_fast(virt, 1, 0, &p) == 1))
+-			phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
++		if (current->mm != NULL) {
++			pagefault_disable();
++			if (__get_user_pages_fast(virt, 1, 0, &p) == 1)
++				phys_addr = page_to_phys(p) + virt % PAGE_SIZE;
++			pagefault_enable();
++		}
+ 
+ 		if (p)
+ 			put_page(p);
+diff --git a/kernel/gcov/fs.c b/kernel/gcov/fs.c
+index e5eb5ea7ea59..cc4ee482d3fb 100644
+--- a/kernel/gcov/fs.c
++++ b/kernel/gcov/fs.c
+@@ -108,9 +108,9 @@ static void *gcov_seq_next(struct seq_file *seq, void *data, loff_t *pos)
+ {
+ 	struct gcov_iterator *iter = data;
+ 
++	(*pos)++;
+ 	if (gcov_iter_next(iter))
+ 		return NULL;
+-	(*pos)++;
+ 
+ 	return iter;
+ }
+diff --git a/kernel/signal.c b/kernel/signal.c
+index e58a6c619824..7938c60e11dd 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1993,8 +1993,12 @@ bool do_notify_parent(struct task_struct *tsk, int sig)
+ 		if (psig->action[SIGCHLD-1].sa.sa_handler == SIG_IGN)
+ 			sig = 0;
+ 	}
++	/*
++	 * Send with __send_signal as si_pid and si_uid are in the
++	 * parent's namespaces.
++	 */
+ 	if (valid_signal(sig) && sig)
+-		__group_send_sig_info(sig, &info, tsk->parent);
++		__send_signal(sig, &info, tsk->parent, PIDTYPE_TGID, false);
+ 	__wake_up_parent(tsk, tsk->parent);
+ 	spin_unlock_irqrestore(&psig->siglock, flags);
+ 
+diff --git a/lib/raid6/test/Makefile b/lib/raid6/test/Makefile
+index 3ab8720aa2f8..b9e6c3648be1 100644
+--- a/lib/raid6/test/Makefile
++++ b/lib/raid6/test/Makefile
+@@ -35,13 +35,13 @@ endif
+ ifeq ($(IS_X86),yes)
+         OBJS   += mmx.o sse1.o sse2.o avx2.o recov_ssse3.o recov_avx2.o avx512.o recov_avx512.o
+         CFLAGS += $(shell echo "pshufb %xmm0, %xmm0" |		\
+-                    gcc -c -x assembler - >&/dev/null &&	\
++                    gcc -c -x assembler - >/dev/null 2>&1 &&	\
+                     rm ./-.o && echo -DCONFIG_AS_SSSE3=1)
+         CFLAGS += $(shell echo "vpbroadcastb %xmm0, %ymm1" |	\
+-                    gcc -c -x assembler - >&/dev/null &&	\
++                    gcc -c -x assembler - >/dev/null 2>&1 &&	\
+                     rm ./-.o && echo -DCONFIG_AS_AVX2=1)
+ 	CFLAGS += $(shell echo "vpmovm2b %k1, %zmm5" |          \
+-		    gcc -c -x assembler - >&/dev/null &&        \
++		    gcc -c -x assembler - >/dev/null 2>&1 &&	\
+ 		    rm ./-.o && echo -DCONFIG_AS_AVX512=1)
+ else ifeq ($(HAS_NEON),yes)
+         OBJS   += neon.o neon1.o neon2.o neon4.o neon8.o recov_neon.o recov_neon_inner.o
+diff --git a/mm/hugetlb.c b/mm/hugetlb.c
+index dd8737a94bec..0366085f37ed 100644
+--- a/mm/hugetlb.c
++++ b/mm/hugetlb.c
+@@ -4910,8 +4910,8 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
+ {
+ 	pgd_t *pgd;
+ 	p4d_t *p4d;
+-	pud_t *pud;
+-	pmd_t *pmd;
++	pud_t *pud, pud_entry;
++	pmd_t *pmd, pmd_entry;
+ 
+ 	pgd = pgd_offset(mm, addr);
+ 	if (!pgd_present(*pgd))
+@@ -4921,17 +4921,19 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
+ 		return NULL;
+ 
+ 	pud = pud_offset(p4d, addr);
+-	if (sz != PUD_SIZE && pud_none(*pud))
++	pud_entry = READ_ONCE(*pud);
++	if (sz != PUD_SIZE && pud_none(pud_entry))
+ 		return NULL;
+ 	/* hugepage or swap? */
+-	if (pud_huge(*pud) || !pud_present(*pud))
++	if (pud_huge(pud_entry) || !pud_present(pud_entry))
+ 		return (pte_t *)pud;
+ 
+ 	pmd = pmd_offset(pud, addr);
+-	if (sz != PMD_SIZE && pmd_none(*pmd))
++	pmd_entry = READ_ONCE(*pmd);
++	if (sz != PMD_SIZE && pmd_none(pmd_entry))
+ 		return NULL;
+ 	/* hugepage or swap? */
+-	if (pmd_huge(*pmd) || !pmd_present(*pmd))
++	if (pmd_huge(pmd_entry) || !pmd_present(pmd_entry))
+ 		return (pte_t *)pmd;
+ 
+ 	return NULL;
+diff --git a/mm/ksm.c b/mm/ksm.c
+index d17c7d57d0d8..c55b89da4f55 100644
+--- a/mm/ksm.c
++++ b/mm/ksm.c
+@@ -2112,8 +2112,16 @@ static void cmp_and_merge_page(struct page *page, struct rmap_item *rmap_item)
+ 
+ 		down_read(&mm->mmap_sem);
+ 		vma = find_mergeable_vma(mm, rmap_item->address);
+-		err = try_to_merge_one_page(vma, page,
+-					    ZERO_PAGE(rmap_item->address));
++		if (vma) {
++			err = try_to_merge_one_page(vma, page,
++					ZERO_PAGE(rmap_item->address));
++		} else {
++			/*
++			 * If the vma is out of date, we do not need to
++			 * continue.
++			 */
++			err = 0;
++		}
+ 		up_read(&mm->mmap_sem);
+ 		/*
+ 		 * In case of failure, the page was not really empty, so we
+diff --git a/mm/madvise.c b/mm/madvise.c
+index 4bb30ed6c8d2..8cbd8c1bfe15 100644
+--- a/mm/madvise.c
++++ b/mm/madvise.c
+@@ -27,6 +27,7 @@
+ #include <linux/swapops.h>
+ #include <linux/shmem_fs.h>
+ #include <linux/mmu_notifier.h>
++#include <linux/sched/mm.h>
+ 
+ #include <asm/tlb.h>
+ 
+@@ -1090,6 +1091,23 @@ int do_madvise(unsigned long start, size_t len_in, int behavior)
+ 	if (write) {
+ 		if (down_write_killable(&current->mm->mmap_sem))
+ 			return -EINTR;
++
++		/*
++		 * We may have stolen the mm from another process
++		 * that is undergoing core dumping.
++		 *
++		 * Right now that's io_ring, in the future it may
++		 * be remote process management and not "current"
++		 * at all.
++		 *
++		 * We need to fix core dumping to not do this,
++		 * but for now we have the mmget_still_valid()
++		 * model.
++		 */
++		if (!mmget_still_valid(current->mm)) {
++			up_write(&current->mm->mmap_sem);
++			return -EINTR;
++		}
+ 	} else {
+ 		down_read(&current->mm->mmap_sem);
+ 	}
+diff --git a/mm/vmalloc.c b/mm/vmalloc.c
+index 6b8eeb0ecee5..cf39e15242c1 100644
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -34,6 +34,7 @@
+ #include <linux/llist.h>
+ #include <linux/bitops.h>
+ #include <linux/rbtree_augmented.h>
++#include <linux/overflow.h>
+ 
+ #include <linux/uaccess.h>
+ #include <asm/tlbflush.h>
+@@ -3054,6 +3055,7 @@ finished:
+  * @vma:		vma to cover
+  * @uaddr:		target user address to start at
+  * @kaddr:		virtual address of vmalloc kernel memory
++ * @pgoff:		offset from @kaddr to start at
+  * @size:		size of map area
+  *
+  * Returns:	0 for success, -Exxx on failure
+@@ -3066,9 +3068,15 @@ finished:
+  * Similar to remap_pfn_range() (see mm/memory.c)
+  */
+ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+-				void *kaddr, unsigned long size)
++				void *kaddr, unsigned long pgoff,
++				unsigned long size)
+ {
+ 	struct vm_struct *area;
++	unsigned long off;
++	unsigned long end_index;
++
++	if (check_shl_overflow(pgoff, PAGE_SHIFT, &off))
++		return -EINVAL;
+ 
+ 	size = PAGE_ALIGN(size);
+ 
+@@ -3082,8 +3090,10 @@ int remap_vmalloc_range_partial(struct vm_area_struct *vma, unsigned long uaddr,
+ 	if (!(area->flags & (VM_USERMAP | VM_DMA_COHERENT)))
+ 		return -EINVAL;
+ 
+-	if (kaddr + size > area->addr + get_vm_area_size(area))
++	if (check_add_overflow(size, off, &end_index) ||
++	    end_index > get_vm_area_size(area))
+ 		return -EINVAL;
++	kaddr += off;
+ 
+ 	do {
+ 		struct page *page = vmalloc_to_page(kaddr);
+@@ -3122,7 +3132,7 @@ int remap_vmalloc_range(struct vm_area_struct *vma, void *addr,
+ 						unsigned long pgoff)
+ {
+ 	return remap_vmalloc_range_partial(vma, vma->vm_start,
+-					   addr + (pgoff << PAGE_SHIFT),
++					   addr, pgoff,
+ 					   vma->vm_end - vma->vm_start);
+ }
+ EXPORT_SYMBOL(remap_vmalloc_range);
+diff --git a/net/ipv4/fib_semantics.c b/net/ipv4/fib_semantics.c
+index a803cdd9400a..ee0f3b2823e0 100644
+--- a/net/ipv4/fib_semantics.c
++++ b/net/ipv4/fib_semantics.c
+@@ -2012,7 +2012,7 @@ static void fib_select_default(const struct flowi4 *flp, struct fib_result *res)
+ 
+ 	hlist_for_each_entry_rcu(fa, fa_head, fa_list) {
+ 		struct fib_info *next_fi = fa->fa_info;
+-		struct fib_nh *nh;
++		struct fib_nh_common *nhc;
+ 
+ 		if (fa->fa_slen != slen)
+ 			continue;
+@@ -2035,8 +2035,8 @@ static void fib_select_default(const struct flowi4 *flp, struct fib_result *res)
+ 		    fa->fa_type != RTN_UNICAST)
+ 			continue;
+ 
+-		nh = fib_info_nh(next_fi, 0);
+-		if (!nh->fib_nh_gw4 || nh->fib_nh_scope != RT_SCOPE_LINK)
++		nhc = fib_info_nhc(next_fi, 0);
++		if (!nhc->nhc_gw_family || nhc->nhc_scope != RT_SCOPE_LINK)
+ 			continue;
+ 
+ 		fib_alias_accessed(fa);
+diff --git a/net/ipv4/xfrm4_output.c b/net/ipv4/xfrm4_output.c
+index 89ba7c87de5d..30ddb9dc9398 100644
+--- a/net/ipv4/xfrm4_output.c
++++ b/net/ipv4/xfrm4_output.c
+@@ -58,9 +58,7 @@ int xfrm4_output_finish(struct sock *sk, struct sk_buff *skb)
+ {
+ 	memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ 
+-#ifdef CONFIG_NETFILTER
+ 	IPCB(skb)->flags |= IPSKB_XFRM_TRANSFORMED;
+-#endif
+ 
+ 	return xfrm_output(sk, skb);
+ }
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index debdaeba5d8c..18d05403d3b5 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -183,15 +183,14 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 					retv = -EBUSY;
+ 					break;
+ 				}
+-			} else if (sk->sk_protocol == IPPROTO_TCP) {
+-				if (sk->sk_prot != &tcpv6_prot) {
+-					retv = -EBUSY;
+-					break;
+-				}
+-				break;
+-			} else {
++			}
++			if (sk->sk_protocol == IPPROTO_TCP &&
++			    sk->sk_prot != &tcpv6_prot) {
++				retv = -EBUSY;
+ 				break;
+ 			}
++			if (sk->sk_protocol != IPPROTO_TCP)
++				break;
+ 			if (sk->sk_state != TCP_ESTABLISHED) {
+ 				retv = -ENOTCONN;
+ 				break;
+diff --git a/net/ipv6/xfrm6_output.c b/net/ipv6/xfrm6_output.c
+index fbe51d40bd7e..e34167f790e6 100644
+--- a/net/ipv6/xfrm6_output.c
++++ b/net/ipv6/xfrm6_output.c
+@@ -111,9 +111,7 @@ int xfrm6_output_finish(struct sock *sk, struct sk_buff *skb)
+ {
+ 	memset(IP6CB(skb), 0, sizeof(*IP6CB(skb)));
+ 
+-#ifdef CONFIG_NETFILTER
+ 	IP6CB(skb)->flags |= IP6SKB_XFRM_TRANSFORMED;
+-#endif
+ 
+ 	return xfrm_output(sk, skb);
+ }
+diff --git a/net/mac80211/main.c b/net/mac80211/main.c
+index a14aef11ffb8..4945d6e6d133 100644
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1161,8 +1161,6 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	local->tx_headroom = max_t(unsigned int , local->hw.extra_tx_headroom,
+ 				   IEEE80211_TX_STATUS_HEADROOM);
+ 
+-	debugfs_hw_add(local);
+-
+ 	/*
+ 	 * if the driver doesn't specify a max listen interval we
+ 	 * use 5 which should be a safe default
+@@ -1254,6 +1252,9 @@ int ieee80211_register_hw(struct ieee80211_hw *hw)
+ 	if (result < 0)
+ 		goto fail_wiphy_register;
+ 
++	debugfs_hw_add(local);
++	rate_control_add_debugfs(local);
++
+ 	rtnl_lock();
+ 
+ 	/* add one default STA interface if supported */
+diff --git a/net/mac80211/rate.c b/net/mac80211/rate.c
+index a1e9fc7878aa..b051f125d3af 100644
+--- a/net/mac80211/rate.c
++++ b/net/mac80211/rate.c
+@@ -214,17 +214,16 @@ static ssize_t rcname_read(struct file *file, char __user *userbuf,
+ 				       ref->ops->name, len);
+ }
+ 
+-static const struct file_operations rcname_ops = {
++const struct file_operations rcname_ops = {
+ 	.read = rcname_read,
+ 	.open = simple_open,
+ 	.llseek = default_llseek,
+ };
+ #endif
+ 
+-static struct rate_control_ref *rate_control_alloc(const char *name,
+-					    struct ieee80211_local *local)
++static struct rate_control_ref *
++rate_control_alloc(const char *name, struct ieee80211_local *local)
+ {
+-	struct dentry *debugfsdir = NULL;
+ 	struct rate_control_ref *ref;
+ 
+ 	ref = kmalloc(sizeof(struct rate_control_ref), GFP_KERNEL);
+@@ -234,13 +233,7 @@ static struct rate_control_ref *rate_control_alloc(const char *name,
+ 	if (!ref->ops)
+ 		goto free;
+ 
+-#ifdef CONFIG_MAC80211_DEBUGFS
+-	debugfsdir = debugfs_create_dir("rc", local->hw.wiphy->debugfsdir);
+-	local->debugfs.rcdir = debugfsdir;
+-	debugfs_create_file("name", 0400, debugfsdir, ref, &rcname_ops);
+-#endif
+-
+-	ref->priv = ref->ops->alloc(&local->hw, debugfsdir);
++	ref->priv = ref->ops->alloc(&local->hw);
+ 	if (!ref->priv)
+ 		goto free;
+ 	return ref;
+diff --git a/net/mac80211/rate.h b/net/mac80211/rate.h
+index 5397c6dad056..79b44d3db171 100644
+--- a/net/mac80211/rate.h
++++ b/net/mac80211/rate.h
+@@ -60,6 +60,29 @@ static inline void rate_control_add_sta_debugfs(struct sta_info *sta)
+ #endif
+ }
+ 
++extern const struct file_operations rcname_ops;
++
++static inline void rate_control_add_debugfs(struct ieee80211_local *local)
++{
++#ifdef CONFIG_MAC80211_DEBUGFS
++	struct dentry *debugfsdir;
++
++	if (!local->rate_ctrl)
++		return;
++
++	if (!local->rate_ctrl->ops->add_debugfs)
++		return;
++
++	debugfsdir = debugfs_create_dir("rc", local->hw.wiphy->debugfsdir);
++	local->debugfs.rcdir = debugfsdir;
++	debugfs_create_file("name", 0400, debugfsdir,
++			    local->rate_ctrl, &rcname_ops);
++
++	local->rate_ctrl->ops->add_debugfs(&local->hw, local->rate_ctrl->priv,
++					   debugfsdir);
++#endif
++}
++
+ void ieee80211_check_rate_mask(struct ieee80211_sub_if_data *sdata);
+ 
+ /* Get a reference to the rate control algorithm. If `name' is NULL, get the
+diff --git a/net/mac80211/rc80211_minstrel_ht.c b/net/mac80211/rc80211_minstrel_ht.c
+index 694a31978a04..5dc3e5bc4e64 100644
+--- a/net/mac80211/rc80211_minstrel_ht.c
++++ b/net/mac80211/rc80211_minstrel_ht.c
+@@ -1635,7 +1635,7 @@ minstrel_ht_init_cck_rates(struct minstrel_priv *mp)
+ }
+ 
+ static void *
+-minstrel_ht_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
++minstrel_ht_alloc(struct ieee80211_hw *hw)
+ {
+ 	struct minstrel_priv *mp;
+ 
+@@ -1673,7 +1673,17 @@ minstrel_ht_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
+ 	mp->update_interval = HZ / 10;
+ 	mp->new_avg = true;
+ 
++	minstrel_ht_init_cck_rates(mp);
++
++	return mp;
++}
++
+ #ifdef CONFIG_MAC80211_DEBUGFS
++static void minstrel_ht_add_debugfs(struct ieee80211_hw *hw, void *priv,
++				    struct dentry *debugfsdir)
++{
++	struct minstrel_priv *mp = priv;
++
+ 	mp->fixed_rate_idx = (u32) -1;
+ 	debugfs_create_u32("fixed_rate_idx", S_IRUGO | S_IWUGO, debugfsdir,
+ 			   &mp->fixed_rate_idx);
+@@ -1681,12 +1691,8 @@ minstrel_ht_alloc(struct ieee80211_hw *hw, struct dentry *debugfsdir)
+ 			   &mp->sample_switch);
+ 	debugfs_create_bool("new_avg", S_IRUGO | S_IWUSR, debugfsdir,
+ 			   &mp->new_avg);
+-#endif
+-
+-	minstrel_ht_init_cck_rates(mp);
+-
+-	return mp;
+ }
++#endif
+ 
+ static void
+ minstrel_ht_free(void *priv)
+@@ -1725,6 +1731,7 @@ static const struct rate_control_ops mac80211_minstrel_ht = {
+ 	.alloc = minstrel_ht_alloc,
+ 	.free = minstrel_ht_free,
+ #ifdef CONFIG_MAC80211_DEBUGFS
++	.add_debugfs = minstrel_ht_add_debugfs,
+ 	.add_sta_debugfs = minstrel_ht_add_sta_debugfs,
+ #endif
+ 	.get_expected_throughput = minstrel_ht_get_expected_throughput,
+diff --git a/net/netrom/nr_route.c b/net/netrom/nr_route.c
+index d41335bad1f8..89cd9de21594 100644
+--- a/net/netrom/nr_route.c
++++ b/net/netrom/nr_route.c
+@@ -208,6 +208,7 @@ static int __must_check nr_add_node(ax25_address *nr, const char *mnemonic,
+ 		/* refcount initialized at 1 */
+ 		spin_unlock_bh(&nr_node_list_lock);
+ 
++		nr_neigh_put(nr_neigh);
+ 		return 0;
+ 	}
+ 	nr_node_lock(nr_node);
+diff --git a/net/openvswitch/conntrack.c b/net/openvswitch/conntrack.c
+index e726159cfcfa..4340f25fe390 100644
+--- a/net/openvswitch/conntrack.c
++++ b/net/openvswitch/conntrack.c
+@@ -1895,7 +1895,8 @@ static void ovs_ct_limit_exit(struct net *net, struct ovs_net *ovs_net)
+ 		struct hlist_head *head = &info->limits[i];
+ 		struct ovs_ct_limit *ct_limit;
+ 
+-		hlist_for_each_entry_rcu(ct_limit, head, hlist_node)
++		hlist_for_each_entry_rcu(ct_limit, head, hlist_node,
++					 lockdep_ovsl_is_held())
+ 			kfree_rcu(ct_limit, rcu);
+ 	}
+ 	kfree(ovs_net->ct_limit_info->limits);
+diff --git a/net/openvswitch/datapath.c b/net/openvswitch/datapath.c
+index 07a7dd185995..c39f3c6c061d 100644
+--- a/net/openvswitch/datapath.c
++++ b/net/openvswitch/datapath.c
+@@ -2466,8 +2466,10 @@ static void __net_exit ovs_exit_net(struct net *dnet)
+ 	struct net *net;
+ 	LIST_HEAD(head);
+ 
+-	ovs_ct_exit(dnet);
+ 	ovs_lock();
++
++	ovs_ct_exit(dnet);
++
+ 	list_for_each_entry_safe(dp, dp_next, &ovs_net->dps, list_node)
+ 		__dp_destroy(dp);
+ 
+diff --git a/net/sched/sch_etf.c b/net/sched/sch_etf.c
+index b1da5589a0c6..c48f91075b5c 100644
+--- a/net/sched/sch_etf.c
++++ b/net/sched/sch_etf.c
+@@ -82,7 +82,7 @@ static bool is_packet_valid(struct Qdisc *sch, struct sk_buff *nskb)
+ 	if (q->skip_sock_check)
+ 		goto skip;
+ 
+-	if (!sk)
++	if (!sk || !sk_fullsock(sk))
+ 		return false;
+ 
+ 	if (!sock_flag(sk, SOCK_TXTIME))
+@@ -137,8 +137,9 @@ static void report_sock_error(struct sk_buff *skb, u32 err, u8 code)
+ 	struct sock_exterr_skb *serr;
+ 	struct sk_buff *clone;
+ 	ktime_t txtime = skb->tstamp;
++	struct sock *sk = skb->sk;
+ 
+-	if (!skb->sk || !(skb->sk->sk_txtime_report_errors))
++	if (!sk || !sk_fullsock(sk) || !(sk->sk_txtime_report_errors))
+ 		return;
+ 
+ 	clone = skb_clone(skb, GFP_ATOMIC);
+@@ -154,7 +155,7 @@ static void report_sock_error(struct sk_buff *skb, u32 err, u8 code)
+ 	serr->ee.ee_data = (txtime >> 32); /* high part of tstamp */
+ 	serr->ee.ee_info = txtime; /* low part of tstamp */
+ 
+-	if (sock_queue_err_skb(skb->sk, clone))
++	if (sock_queue_err_skb(sk, clone))
+ 		kfree_skb(clone);
+ }
+ 
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index de3c077733a7..298557744818 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -1028,6 +1028,8 @@ static void svc_delete_xprt(struct svc_xprt *xprt)
+ 
+ 	dprintk("svc: svc_delete_xprt(%p)\n", xprt);
+ 	xprt->xpt_ops->xpo_detach(xprt);
++	if (xprt->xpt_bc_xprt)
++		xprt->xpt_bc_xprt->ops->close(xprt->xpt_bc_xprt);
+ 
+ 	spin_lock_bh(&serv->sv_lock);
+ 	list_del_init(&xprt->xpt_list);
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+index 908e78bb87c6..cf80394b2db3 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_backchannel.c
+@@ -242,6 +242,8 @@ static void
+ xprt_rdma_bc_close(struct rpc_xprt *xprt)
+ {
+ 	dprintk("svcrdma: %s: xprt %p\n", __func__, xprt);
++
++	xprt_disconnect_done(xprt);
+ 	xprt->cwnd = RPC_CWNDSHIFT;
+ }
+ 
+diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
+index d86c664ea6af..882f46fadd01 100644
+--- a/net/sunrpc/xprtsock.c
++++ b/net/sunrpc/xprtsock.c
+@@ -2714,6 +2714,7 @@ static int bc_send_request(struct rpc_rqst *req)
+ 
+ static void bc_close(struct rpc_xprt *xprt)
+ {
++	xprt_disconnect_done(xprt);
+ }
+ 
+ /*
+diff --git a/net/tipc/crypto.c b/net/tipc/crypto.c
+index c8c47fc72653..8c47ded2edb6 100644
+--- a/net/tipc/crypto.c
++++ b/net/tipc/crypto.c
+@@ -1712,6 +1712,7 @@ exit:
+ 	case -EBUSY:
+ 		this_cpu_inc(stats->stat[STAT_ASYNC]);
+ 		*skb = NULL;
++		tipc_aead_put(aead);
+ 		return rc;
+ 	default:
+ 		this_cpu_inc(stats->stat[STAT_NOK]);
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index 0c88778c88b5..d50be9a3d479 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -2037,6 +2037,7 @@ void tipc_rcv(struct net *net, struct sk_buff *skb, struct tipc_bearer *b)
+ 		n = tipc_node_find_by_id(net, ehdr->id);
+ 	}
+ 	tipc_crypto_rcv(net, (n) ? n->crypto_rx : NULL, &skb, b);
++	tipc_node_put(n);
+ 	if (!skb)
+ 		return;
+ 
+@@ -2089,7 +2090,7 @@ rcv:
+ 	/* Check/update node state before receiving */
+ 	if (unlikely(skb)) {
+ 		if (unlikely(skb_linearize(skb)))
+-			goto discard;
++			goto out_node_put;
+ 		tipc_node_write_lock(n);
+ 		if (tipc_node_check_state(n, skb, bearer_id, &xmitq)) {
+ 			if (le->link) {
+@@ -2118,6 +2119,7 @@ rcv:
+ 	if (!skb_queue_empty(&xmitq))
+ 		tipc_bearer_xmit(net, bearer_id, &xmitq, &le->maddr, n);
+ 
++out_node_put:
+ 	tipc_node_put(n);
+ discard:
+ 	kfree_skb(skb);
+diff --git a/net/x25/x25_dev.c b/net/x25/x25_dev.c
+index 00e782335cb0..25bf72ee6cad 100644
+--- a/net/x25/x25_dev.c
++++ b/net/x25/x25_dev.c
+@@ -115,8 +115,10 @@ int x25_lapb_receive_frame(struct sk_buff *skb, struct net_device *dev,
+ 		goto drop;
+ 	}
+ 
+-	if (!pskb_may_pull(skb, 1))
++	if (!pskb_may_pull(skb, 1)) {
++		x25_neigh_put(nb);
+ 		return 0;
++	}
+ 
+ 	switch (skb->data[0]) {
+ 
+diff --git a/samples/vfio-mdev/mdpy.c b/samples/vfio-mdev/mdpy.c
+index cc86bf6566e4..9894693f3be1 100644
+--- a/samples/vfio-mdev/mdpy.c
++++ b/samples/vfio-mdev/mdpy.c
+@@ -418,7 +418,7 @@ static int mdpy_mmap(struct mdev_device *mdev, struct vm_area_struct *vma)
+ 		return -EINVAL;
+ 
+ 	return remap_vmalloc_range_partial(vma, vma->vm_start,
+-					   mdev_state->memblk,
++					   mdev_state->memblk, 0,
+ 					   vma->vm_end - vma->vm_start);
+ }
+ 
+diff --git a/scripts/kconfig/qconf.cc b/scripts/kconfig/qconf.cc
+index 82773cc35d35..0f8c77f84711 100644
+--- a/scripts/kconfig/qconf.cc
++++ b/scripts/kconfig/qconf.cc
+@@ -627,7 +627,7 @@ void ConfigList::updateMenuList(ConfigItem *parent, struct menu* menu)
+ 			last = item;
+ 			continue;
+ 		}
+-	hide:
++hide:
+ 		if (item && item->menu == child) {
+ 			last = parent->firstChild();
+ 			if (last == item)
+@@ -692,7 +692,7 @@ void ConfigList::updateMenuList(ConfigList *parent, struct menu* menu)
+ 			last = item;
+ 			continue;
+ 		}
+-	hide:
++hide:
+ 		if (item && item->menu == child) {
+ 			last = (ConfigItem*)parent->topLevelItem(0);
+ 			if (last == item)
+@@ -1225,10 +1225,11 @@ QMenu* ConfigInfoView::createStandardContextMenu(const QPoint & pos)
+ {
+ 	QMenu* popup = Parent::createStandardContextMenu(pos);
+ 	QAction* action = new QAction("Show Debug Info", popup);
+-	  action->setCheckable(true);
+-	  connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
+-	  connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool)));
+-	  action->setChecked(showDebug());
++
++	action->setCheckable(true);
++	connect(action, SIGNAL(toggled(bool)), SLOT(setShowDebug(bool)));
++	connect(this, SIGNAL(showDebugChanged(bool)), action, SLOT(setOn(bool)));
++	action->setChecked(showDebug());
+ 	popup->addSeparator();
+ 	popup->addAction(action);
+ 	return popup;
+diff --git a/security/keys/internal.h b/security/keys/internal.h
+index ba3e2da14cef..6d0ca48ae9a5 100644
+--- a/security/keys/internal.h
++++ b/security/keys/internal.h
+@@ -16,6 +16,8 @@
+ #include <linux/keyctl.h>
+ #include <linux/refcount.h>
+ #include <linux/compat.h>
++#include <linux/mm.h>
++#include <linux/vmalloc.h>
+ 
+ struct iovec;
+ 
+@@ -349,4 +351,14 @@ static inline void key_check(const struct key *key)
+ 
+ #endif
+ 
++/*
++ * Helper function to clear and free a kvmalloc'ed memory object.
++ */
++static inline void __kvzfree(const void *addr, size_t len)
++{
++	if (addr) {
++		memset((void *)addr, 0, len);
++		kvfree(addr);
++	}
++}
+ #endif /* _INTERNAL_H */
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 106e16f9006b..5e01192e222a 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -339,7 +339,7 @@ long keyctl_update_key(key_serial_t id,
+ 	payload = NULL;
+ 	if (plen) {
+ 		ret = -ENOMEM;
+-		payload = kmalloc(plen, GFP_KERNEL);
++		payload = kvmalloc(plen, GFP_KERNEL);
+ 		if (!payload)
+ 			goto error;
+ 
+@@ -360,7 +360,7 @@ long keyctl_update_key(key_serial_t id,
+ 
+ 	key_ref_put(key_ref);
+ error2:
+-	kzfree(payload);
++	__kvzfree(payload, plen);
+ error:
+ 	return ret;
+ }
+@@ -827,7 +827,8 @@ long keyctl_read_key(key_serial_t keyid, char __user *buffer, size_t buflen)
+ 	struct key *key;
+ 	key_ref_t key_ref;
+ 	long ret;
+-	char *key_data;
++	char *key_data = NULL;
++	size_t key_data_len;
+ 
+ 	/* find the key first */
+ 	key_ref = lookup_user_key(keyid, 0, 0);
+@@ -878,24 +879,51 @@ can_read_key:
+ 	 * Allocating a temporary buffer to hold the keys before
+ 	 * transferring them to user buffer to avoid potential
+ 	 * deadlock involving page fault and mmap_sem.
++	 *
++	 * key_data_len = (buflen <= PAGE_SIZE)
++	 *		? buflen : actual length of key data
++	 *
++	 * This prevents allocating arbitrary large buffer which can
++	 * be much larger than the actual key length. In the latter case,
++	 * at least 2 passes of this loop is required.
+ 	 */
+-	key_data = kmalloc(buflen, GFP_KERNEL);
++	key_data_len = (buflen <= PAGE_SIZE) ? buflen : 0;
++	for (;;) {
++		if (key_data_len) {
++			key_data = kvmalloc(key_data_len, GFP_KERNEL);
++			if (!key_data) {
++				ret = -ENOMEM;
++				goto key_put_out;
++			}
++		}
+ 
+-	if (!key_data) {
+-		ret = -ENOMEM;
+-		goto key_put_out;
+-	}
+-	ret = __keyctl_read_key(key, key_data, buflen);
++		ret = __keyctl_read_key(key, key_data, key_data_len);
++
++		/*
++		 * Read methods will just return the required length without
++		 * any copying if the provided length isn't large enough.
++		 */
++		if (ret <= 0 || ret > buflen)
++			break;
++
++		/*
++		 * The key may change (unlikely) in between 2 consecutive
++		 * __keyctl_read_key() calls. In this case, we reallocate
++		 * a larger buffer and redo the key read when
++		 * key_data_len < ret <= buflen.
++		 */
++		if (ret > key_data_len) {
++			if (unlikely(key_data))
++				__kvzfree(key_data, key_data_len);
++			key_data_len = ret;
++			continue;	/* Allocate buffer */
++		}
+ 
+-	/*
+-	 * Read methods will just return the required length without
+-	 * any copying if the provided length isn't large enough.
+-	 */
+-	if (ret > 0 && ret <= buflen) {
+ 		if (copy_to_user(buffer, key_data, ret))
+ 			ret = -EFAULT;
++		break;
+ 	}
+-	kzfree(key_data);
++	__kvzfree(key_data, key_data_len);
+ 
+ key_put_out:
+ 	key_put(key);
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index f41d8b7864c1..af21e9583c0d 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2076,7 +2076,6 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+  * should be ignored from the beginning.
+  */
+ static const struct snd_pci_quirk driver_blacklist[] = {
+-	SND_PCI_QUIRK(0x1043, 0x874f, "ASUS ROG Zenith II / Strix", 0),
+ 	SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0),
+ 	SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0),
+ 	{}
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 5119a9ae3d8a..8bc4d66ff986 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -38,6 +38,10 @@ static bool static_hdmi_pcm;
+ module_param(static_hdmi_pcm, bool, 0644);
+ MODULE_PARM_DESC(static_hdmi_pcm, "Don't restrict PCM parameters per ELD info");
+ 
++static bool enable_acomp = true;
++module_param(enable_acomp, bool, 0444);
++MODULE_PARM_DESC(enable_acomp, "Enable audio component binding (default=yes)");
++
+ struct hdmi_spec_per_cvt {
+ 	hda_nid_t cvt_nid;
+ 	int assigned;
+@@ -2638,6 +2642,11 @@ static void generic_acomp_init(struct hda_codec *codec,
+ {
+ 	struct hdmi_spec *spec = codec->spec;
+ 
++	if (!enable_acomp) {
++		codec_info(codec, "audio component disabled by module option\n");
++		return;
++	}
++
+ 	spec->port2pin = port2pin;
+ 	setup_drm_audio_ops(codec, ops);
+ 	if (!snd_hdac_acomp_init(&codec->bus->core, &spec->drm_audio_ops,
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 02b9830d4b5f..f2fccf267b48 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -369,6 +369,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0233:
+ 	case 0x10ec0235:
+ 	case 0x10ec0236:
++	case 0x10ec0245:
+ 	case 0x10ec0255:
+ 	case 0x10ec0256:
+ 	case 0x10ec0257:
+@@ -789,9 +790,11 @@ static void alc_ssid_check(struct hda_codec *codec, const hda_nid_t *ports)
+ {
+ 	if (!alc_subsystem_id(codec, ports)) {
+ 		struct alc_spec *spec = codec->spec;
+-		codec_dbg(codec,
+-			  "realtek: Enable default setup for auto mode as fallback\n");
+-		spec->init_amp = ALC_INIT_DEFAULT;
++		if (spec->init_amp == ALC_INIT_UNDEFINED) {
++			codec_dbg(codec,
++				  "realtek: Enable default setup for auto mode as fallback\n");
++			spec->init_amp = ALC_INIT_DEFAULT;
++		}
+ 	}
+ }
+ 
+@@ -8071,6 +8074,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->gen.mixer_nid = 0;
+ 		break;
+ 	case 0x10ec0215:
++	case 0x10ec0245:
+ 	case 0x10ec0285:
+ 	case 0x10ec0289:
+ 		spec->codec_variant = ALC269_TYPE_ALC215;
+@@ -9332,6 +9336,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ 	HDA_CODEC_ENTRY(0x10ec0234, "ALC234", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0235, "ALC233", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0236, "ALC236", patch_alc269),
++	HDA_CODEC_ENTRY(0x10ec0245, "ALC245", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0255, "ALC255", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0256, "ALC256", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0257, "ALC257", patch_alc269),
+diff --git a/sound/soc/intel/atom/sst-atom-controls.c b/sound/soc/intel/atom/sst-atom-controls.c
+index f883c9340eee..df8f7994d3b7 100644
+--- a/sound/soc/intel/atom/sst-atom-controls.c
++++ b/sound/soc/intel/atom/sst-atom-controls.c
+@@ -966,7 +966,9 @@ static int sst_set_be_modules(struct snd_soc_dapm_widget *w,
+ 	dev_dbg(c->dev, "Enter: widget=%s\n", w->name);
+ 
+ 	if (SND_SOC_DAPM_EVENT_ON(event)) {
++		mutex_lock(&drv->lock);
+ 		ret = sst_send_slot_map(drv);
++		mutex_unlock(&drv->lock);
+ 		if (ret)
+ 			return ret;
+ 		ret = sst_send_pipe_module_params(w, k);
+diff --git a/sound/soc/intel/boards/bytcr_rt5640.c b/sound/soc/intel/boards/bytcr_rt5640.c
+index 6bd9ae813be2..d14d5f7db168 100644
+--- a/sound/soc/intel/boards/bytcr_rt5640.c
++++ b/sound/soc/intel/boards/bytcr_rt5640.c
+@@ -591,6 +591,17 @@ static const struct dmi_system_id byt_rt5640_quirk_table[] = {
+ 					BYT_RT5640_SSP0_AIF1 |
+ 					BYT_RT5640_MCLK_EN),
+ 	},
++	{
++		/* MPMAN MPWIN895CL */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "MPMAN"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "MPWIN8900CL"),
++		},
++		.driver_data = (void *)(BYTCR_INPUT_DEFAULTS |
++					BYT_RT5640_MONO_SPEAKER |
++					BYT_RT5640_SSP0_AIF1 |
++					BYT_RT5640_MCLK_EN),
++	},
+ 	{	/* MSI S100 tablet */
+ 		.matches = {
+ 			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "Micro-Star International Co., Ltd."),
+diff --git a/sound/soc/qcom/qdsp6/q6asm-dai.c b/sound/soc/qcom/qdsp6/q6asm-dai.c
+index c0d422d0ab94..d7dc80ede892 100644
+--- a/sound/soc/qcom/qdsp6/q6asm-dai.c
++++ b/sound/soc/qcom/qdsp6/q6asm-dai.c
+@@ -73,7 +73,7 @@ struct q6asm_dai_data {
+ };
+ 
+ static const struct snd_pcm_hardware q6asm_dai_hardware_capture = {
+-	.info =                 (SNDRV_PCM_INFO_MMAP |
++	.info =                 (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_BATCH |
+ 				SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ 				SNDRV_PCM_INFO_MMAP_VALID |
+ 				SNDRV_PCM_INFO_INTERLEAVED |
+@@ -95,7 +95,7 @@ static const struct snd_pcm_hardware q6asm_dai_hardware_capture = {
+ };
+ 
+ static struct snd_pcm_hardware q6asm_dai_hardware_playback = {
+-	.info =                 (SNDRV_PCM_INFO_MMAP |
++	.info =                 (SNDRV_PCM_INFO_MMAP | SNDRV_PCM_INFO_BATCH |
+ 				SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ 				SNDRV_PCM_INFO_MMAP_VALID |
+ 				SNDRV_PCM_INFO_INTERLEAVED |
+diff --git a/sound/soc/soc-dapm.c b/sound/soc/soc-dapm.c
+index 17962564866d..c8fd65318d5e 100644
+--- a/sound/soc/soc-dapm.c
++++ b/sound/soc/soc-dapm.c
+@@ -423,7 +423,7 @@ static int dapm_kcontrol_data_alloc(struct snd_soc_dapm_widget *widget,
+ 
+ 			memset(&template, 0, sizeof(template));
+ 			template.reg = e->reg;
+-			template.mask = e->mask << e->shift_l;
++			template.mask = e->mask;
+ 			template.shift = e->shift_l;
+ 			template.off_val = snd_soc_enum_item_to_val(e, 0);
+ 			template.on_val = template.off_val;
+@@ -546,8 +546,22 @@ static bool dapm_kcontrol_set_value(const struct snd_kcontrol *kcontrol,
+ 	if (data->value == value)
+ 		return false;
+ 
+-	if (data->widget)
+-		data->widget->on_val = value;
++	if (data->widget) {
++		switch (dapm_kcontrol_get_wlist(kcontrol)->widgets[0]->id) {
++		case snd_soc_dapm_switch:
++		case snd_soc_dapm_mixer:
++		case snd_soc_dapm_mixer_named_ctl:
++			data->widget->on_val = value & data->widget->mask;
++			break;
++		case snd_soc_dapm_demux:
++		case snd_soc_dapm_mux:
++			data->widget->on_val = value >> data->widget->shift;
++			break;
++		default:
++			data->widget->on_val = value;
++			break;
++		}
++	}
+ 
+ 	data->value = value;
+ 
+diff --git a/sound/usb/format.c b/sound/usb/format.c
+index 9f5cb4ed3a0c..928c8761a962 100644
+--- a/sound/usb/format.c
++++ b/sound/usb/format.c
+@@ -247,6 +247,52 @@ static int parse_audio_format_rates_v1(struct snd_usb_audio *chip, struct audiof
+ 	return 0;
+ }
+ 
++/*
++ * Many Focusrite devices supports a limited set of sampling rates per
++ * altsetting. Maximum rate is exposed in the last 4 bytes of Format Type
++ * descriptor which has a non-standard bLength = 10.
++ */
++static bool focusrite_valid_sample_rate(struct snd_usb_audio *chip,
++					struct audioformat *fp,
++					unsigned int rate)
++{
++	struct usb_interface *iface;
++	struct usb_host_interface *alts;
++	unsigned char *fmt;
++	unsigned int max_rate;
++
++	iface = usb_ifnum_to_if(chip->dev, fp->iface);
++	if (!iface)
++		return true;
++
++	alts = &iface->altsetting[fp->altset_idx];
++	fmt = snd_usb_find_csint_desc(alts->extra, alts->extralen,
++				      NULL, UAC_FORMAT_TYPE);
++	if (!fmt)
++		return true;
++
++	if (fmt[0] == 10) { /* bLength */
++		max_rate = combine_quad(&fmt[6]);
++
++		/* Validate max rate */
++		if (max_rate != 48000 &&
++		    max_rate != 96000 &&
++		    max_rate != 192000 &&
++		    max_rate != 384000) {
++
++			usb_audio_info(chip,
++				"%u:%d : unexpected max rate: %u\n",
++				fp->iface, fp->altsetting, max_rate);
++
++			return true;
++		}
++
++		return rate <= max_rate;
++	}
++
++	return true;
++}
++
+ /*
+  * Helper function to walk the array of sample rate triplets reported by
+  * the device. The problem is that we need to parse whole array first to
+@@ -283,6 +329,11 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
+ 		}
+ 
+ 		for (rate = min; rate <= max; rate += res) {
++			/* Filter out invalid rates on Focusrite devices */
++			if (USB_ID_VENDOR(chip->usb_id) == 0x1235 &&
++			    !focusrite_valid_sample_rate(chip, fp, rate))
++				goto skip_rate;
++
+ 			if (fp->rate_table)
+ 				fp->rate_table[nr_rates] = rate;
+ 			if (!fp->rate_min || rate < fp->rate_min)
+@@ -297,6 +348,7 @@ static int parse_uac2_sample_rate_range(struct snd_usb_audio *chip,
+ 				break;
+ 			}
+ 
++skip_rate:
+ 			/* avoid endless loop */
+ 			if (res == 0)
+ 				break;
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 7e2e1fc5b9f0..7a2961ad60de 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1755,8 +1755,10 @@ static void build_connector_control(struct usb_mixer_interface *mixer,
+ {
+ 	struct snd_kcontrol *kctl;
+ 	struct usb_mixer_elem_info *cval;
++	const struct usbmix_name_map *map;
+ 
+-	if (check_ignored_ctl(find_map(imap, term->id, 0)))
++	map = find_map(imap, term->id, 0);
++	if (check_ignored_ctl(map))
+ 		return;
+ 
+ 	cval = kzalloc(sizeof(*cval), GFP_KERNEL);
+@@ -1788,8 +1790,12 @@ static void build_connector_control(struct usb_mixer_interface *mixer,
+ 		usb_mixer_elem_info_free(cval);
+ 		return;
+ 	}
+-	get_connector_control_name(mixer, term, is_input, kctl->id.name,
+-				   sizeof(kctl->id.name));
++
++	if (check_mapped_name(map, kctl->id.name, sizeof(kctl->id.name)))
++		strlcat(kctl->id.name, " Jack", sizeof(kctl->id.name));
++	else
++		get_connector_control_name(mixer, term, is_input, kctl->id.name,
++					   sizeof(kctl->id.name));
+ 	kctl->private_free = snd_usb_mixer_elem_free;
+ 	snd_usb_mixer_add_control(&cval->head, kctl);
+ }
+@@ -3090,6 +3096,7 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ 		if (map->id == state.chip->usb_id) {
+ 			state.map = map->map;
+ 			state.selector_map = map->selector_map;
++			mixer->connector_map = map->connector_map;
+ 			mixer->ignore_ctl_error |= map->ignore_ctl_error;
+ 			break;
+ 		}
+@@ -3171,10 +3178,32 @@ static int snd_usb_mixer_controls(struct usb_mixer_interface *mixer)
+ 	return 0;
+ }
+ 
++static int delegate_notify(struct usb_mixer_interface *mixer, int unitid,
++			   u8 *control, u8 *channel)
++{
++	const struct usbmix_connector_map *map = mixer->connector_map;
++
++	if (!map)
++		return unitid;
++
++	for (; map->id; map++) {
++		if (map->id == unitid) {
++			if (control && map->control)
++				*control = map->control;
++			if (channel && map->channel)
++				*channel = map->channel;
++			return map->delegated_id;
++		}
++	}
++	return unitid;
++}
++
+ void snd_usb_mixer_notify_id(struct usb_mixer_interface *mixer, int unitid)
+ {
+ 	struct usb_mixer_elem_list *list;
+ 
++	unitid = delegate_notify(mixer, unitid, NULL, NULL);
++
+ 	for_each_mixer_elem(list, mixer, unitid) {
+ 		struct usb_mixer_elem_info *info =
+ 			mixer_elem_list_to_info(list);
+@@ -3244,6 +3273,8 @@ static void snd_usb_mixer_interrupt_v2(struct usb_mixer_interface *mixer,
+ 		return;
+ 	}
+ 
++	unitid = delegate_notify(mixer, unitid, &control, &channel);
++
+ 	for_each_mixer_elem(list, mixer, unitid)
+ 		count++;
+ 
+diff --git a/sound/usb/mixer.h b/sound/usb/mixer.h
+index 65d6d08c96f5..41ec9dc4139b 100644
+--- a/sound/usb/mixer.h
++++ b/sound/usb/mixer.h
+@@ -6,6 +6,13 @@
+ 
+ struct media_mixer_ctl;
+ 
++struct usbmix_connector_map {
++	u8 id;
++	u8 delegated_id;
++	u8 control;
++	u8 channel;
++};
++
+ struct usb_mixer_interface {
+ 	struct snd_usb_audio *chip;
+ 	struct usb_host_interface *hostif;
+@@ -18,6 +25,9 @@ struct usb_mixer_interface {
+ 	/* the usb audio specification version this interface complies to */
+ 	int protocol;
+ 
++	/* optional connector delegation map */
++	const struct usbmix_connector_map *connector_map;
++
+ 	/* Sound Blaster remote control stuff */
+ 	const struct rc_config *rc_cfg;
+ 	u32 rc_code;
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index b4e77000f441..0260c750e156 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -27,6 +27,7 @@ struct usbmix_ctl_map {
+ 	u32 id;
+ 	const struct usbmix_name_map *map;
+ 	const struct usbmix_selector_map *selector_map;
++	const struct usbmix_connector_map *connector_map;
+ 	int ignore_ctl_error;
+ };
+ 
+@@ -369,6 +370,33 @@ static const struct usbmix_name_map asus_rog_map[] = {
+ 	{}
+ };
+ 
++/* TRX40 mobos with Realtek ALC1220-VB */
++static const struct usbmix_name_map trx40_mobo_map[] = {
++	{ 18, NULL }, /* OT, IEC958 - broken response, disabled */
++	{ 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
++	{ 16, "Speaker" },		/* OT */
++	{ 22, "Speaker Playback" },	/* FU */
++	{ 7, "Line" },			/* IT */
++	{ 19, "Line Capture" },		/* FU */
++	{ 17, "Front Headphone" },	/* OT */
++	{ 23, "Front Headphone Playback" },	/* FU */
++	{ 8, "Mic" },			/* IT */
++	{ 20, "Mic Capture" },		/* FU */
++	{ 9, "Front Mic" },		/* IT */
++	{ 21, "Front Mic Capture" },	/* FU */
++	{ 24, "IEC958 Playback" },	/* FU */
++	{}
++};
++
++static const struct usbmix_connector_map trx40_mobo_connector_map[] = {
++	{ 10, 16 },	/* (Back) Speaker */
++	{ 11, 17 },	/* Front Headphone */
++	{ 13, 7 },	/* Line */
++	{ 14, 8 },	/* Mic */
++	{ 15, 9 },	/* Front Mic */
++	{}
++};
++
+ /*
+  * Control map entries
+  */
+@@ -500,7 +528,8 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 	},
+ 	{	/* Gigabyte TRX40 Aorus Pro WiFi */
+ 		.id = USB_ID(0x0414, 0xa002),
+-		.map = asus_rog_map,
++		.map = trx40_mobo_map,
++		.connector_map = trx40_mobo_connector_map,
+ 	},
+ 	{	/* ASUS ROG Zenith II */
+ 		.id = USB_ID(0x0b05, 0x1916),
+@@ -512,11 +541,13 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 	},
+ 	{	/* MSI TRX40 Creator */
+ 		.id = USB_ID(0x0db0, 0x0d64),
+-		.map = asus_rog_map,
++		.map = trx40_mobo_map,
++		.connector_map = trx40_mobo_connector_map,
+ 	},
+ 	{	/* MSI TRX40 */
+ 		.id = USB_ID(0x0db0, 0x543d),
+-		.map = asus_rog_map,
++		.map = trx40_mobo_map,
++		.connector_map = trx40_mobo_connector_map,
+ 	},
+ 	{ 0 } /* terminator */
+ };
+diff --git a/sound/usb/mixer_quirks.c b/sound/usb/mixer_quirks.c
+index c237e24f08d9..0f072426b84c 100644
+--- a/sound/usb/mixer_quirks.c
++++ b/sound/usb/mixer_quirks.c
+@@ -1508,11 +1508,15 @@ static int snd_microii_spdif_default_get(struct snd_kcontrol *kcontrol,
+ 
+ 	/* use known values for that card: interface#1 altsetting#1 */
+ 	iface = usb_ifnum_to_if(chip->dev, 1);
+-	if (!iface || iface->num_altsetting < 2)
+-		return -EINVAL;
++	if (!iface || iface->num_altsetting < 2) {
++		err = -EINVAL;
++		goto end;
++	}
+ 	alts = &iface->altsetting[1];
+-	if (get_iface_desc(alts)->bNumEndpoints < 1)
+-		return -EINVAL;
++	if (get_iface_desc(alts)->bNumEndpoints < 1) {
++		err = -EINVAL;
++		goto end;
++	}
+ 	ep = get_endpoint(alts, 0)->bEndpointAddress;
+ 
+ 	err = snd_usb_ctl_msg(chip->dev,
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index d187aa6d50db..8c2f5c23e1b4 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3592,5 +3592,61 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ 		}
+ 	}
+ },
++{
++	/*
++	 * Pioneer DJ DJM-250MK2
++	 * PCM is 8 channels out @ 48 fixed (endpoints 0x01).
++	 * The output from computer to the mixer is usable.
++	 *
++	 * The input (phono or line to computer) is not working.
++	 * It should be at endpoint 0x82 and probably also 8 channels,
++	 * but it seems that it works only with Pioneer proprietary software.
++	 * Even on officially supported OS, the Audacity was unable to record
++	 * and Mixxx to recognize the control vinyls.
++	 */
++	USB_DEVICE_VENDOR_SPEC(0x2b73, 0x0017),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.ifnum = QUIRK_ANY_INTERFACE,
++		.type = QUIRK_COMPOSITE,
++		.data = (const struct snd_usb_audio_quirk[]) {
++			{
++				.ifnum = 0,
++				.type = QUIRK_AUDIO_FIXED_ENDPOINT,
++				.data = &(const struct audioformat) {
++					.formats = SNDRV_PCM_FMTBIT_S24_3LE,
++					.channels = 8, // outputs
++					.iface = 0,
++					.altsetting = 1,
++					.altset_idx = 1,
++					.endpoint = 0x01,
++					.ep_attr = USB_ENDPOINT_XFER_ISOC|
++						USB_ENDPOINT_SYNC_ASYNC,
++					.rates = SNDRV_PCM_RATE_48000,
++					.rate_min = 48000,
++					.rate_max = 48000,
++					.nr_rates = 1,
++					.rate_table = (unsigned int[]) { 48000 }
++				}
++			},
++			{
++				.ifnum = -1
++			}
++		}
++	}
++},
++
++#define ALC1220_VB_DESKTOP(vend, prod) { \
++	USB_DEVICE(vend, prod),	\
++	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) { \
++		.vendor_name = "Realtek", \
++		.product_name = "ALC1220-VB-DT", \
++		.profile_name = "Realtek-ALC1220-VB-Desktop", \
++		.ifnum = QUIRK_NO_INTERFACE \
++	} \
++}
++ALC1220_VB_DESKTOP(0x0414, 0xa002), /* Gigabyte TRX40 Aorus Pro WiFi */
++ALC1220_VB_DESKTOP(0x0db0, 0x0d64), /* MSI TRX40 Creator */
++ALC1220_VB_DESKTOP(0x0db0, 0x543d), /* MSI TRX40 */
++#undef ALC1220_VB_DESKTOP
+ 
+ #undef USB_DEVICE_VENDOR_SPEC
+diff --git a/sound/usb/usx2y/usbusx2yaudio.c b/sound/usb/usx2y/usbusx2yaudio.c
+index 772f6f3ccbb1..00074af5873c 100644
+--- a/sound/usb/usx2y/usbusx2yaudio.c
++++ b/sound/usb/usx2y/usbusx2yaudio.c
+@@ -681,6 +681,8 @@ static int usX2Y_rate_set(struct usX2Ydev *usX2Y, int rate)
+ 			us->submitted =	2*NOOF_SETRATE_URBS;
+ 			for (i = 0; i < NOOF_SETRATE_URBS; ++i) {
+ 				struct urb *urb = us->urb[i];
++				if (!urb)
++					continue;
+ 				if (urb->status) {
+ 					if (!err)
+ 						err = -ENODEV;
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index 6d47345a310b..c364e4be5e6e 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -289,6 +289,8 @@ int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
+ 
+ static __u32 get_xdp_id(struct xdp_link_info *info, __u32 flags)
+ {
++	flags &= XDP_FLAGS_MODES;
++
+ 	if (info->attach_mode != XDP_ATTACHED_MULTI && !flags)
+ 		return info->prog_id;
+ 	if (flags & XDP_FLAGS_DRV_MODE)
+diff --git a/tools/testing/nvdimm/Kbuild b/tools/testing/nvdimm/Kbuild
+index dbebf05f5931..47f9cc9dcd94 100644
+--- a/tools/testing/nvdimm/Kbuild
++++ b/tools/testing/nvdimm/Kbuild
+@@ -21,8 +21,8 @@ DRIVERS := ../../../drivers
+ NVDIMM_SRC := $(DRIVERS)/nvdimm
+ ACPI_SRC := $(DRIVERS)/acpi/nfit
+ DAX_SRC := $(DRIVERS)/dax
+-ccflags-y := -I$(src)/$(NVDIMM_SRC)/
+-ccflags-y += -I$(src)/$(ACPI_SRC)/
++ccflags-y := -I$(srctree)/drivers/nvdimm/
++ccflags-y += -I$(srctree)/drivers/acpi/nfit/
+ 
+ obj-$(CONFIG_LIBNVDIMM) += libnvdimm.o
+ obj-$(CONFIG_BLK_DEV_PMEM) += nd_pmem.o
+diff --git a/tools/testing/nvdimm/test/Kbuild b/tools/testing/nvdimm/test/Kbuild
+index fb3c3d7cdb9b..75baebf8f4ba 100644
+--- a/tools/testing/nvdimm/test/Kbuild
++++ b/tools/testing/nvdimm/test/Kbuild
+@@ -1,6 +1,6 @@
+ # SPDX-License-Identifier: GPL-2.0
+-ccflags-y := -I$(src)/../../../../drivers/nvdimm/
+-ccflags-y += -I$(src)/../../../../drivers/acpi/nfit/
++ccflags-y := -I$(srctree)/drivers/nvdimm/
++ccflags-y += -I$(srctree)/drivers/acpi/nfit/
+ 
+ obj-m += nfit_test.o
+ obj-m += nfit_test_iomap.o
+diff --git a/tools/testing/nvdimm/test/nfit.c b/tools/testing/nvdimm/test/nfit.c
+index bf6422a6af7f..a8ee5c4d41eb 100644
+--- a/tools/testing/nvdimm/test/nfit.c
++++ b/tools/testing/nvdimm/test/nfit.c
+@@ -3164,7 +3164,9 @@ static __init int nfit_test_init(void)
+ 	mcsafe_test();
+ 	dax_pmem_test();
+ 	dax_pmem_core_test();
++#ifdef CONFIG_DEV_DAX_PMEM_COMPAT
+ 	dax_pmem_compat_test();
++#endif
+ 
+ 	nfit_test_setup(nfit_test_lookup, nfit_test_evaluate_dsm);
+ 
+diff --git a/tools/testing/selftests/kmod/kmod.sh b/tools/testing/selftests/kmod/kmod.sh
+index 8b944cf042f6..315a43111e04 100755
+--- a/tools/testing/selftests/kmod/kmod.sh
++++ b/tools/testing/selftests/kmod/kmod.sh
+@@ -505,18 +505,23 @@ function test_num()
+ 	fi
+ }
+ 
+-function get_test_count()
++function get_test_data()
+ {
+ 	test_num $1
+-	TEST_DATA=$(echo $ALL_TESTS | awk '{print $'$1'}')
++	local field_num=$(echo $1 | sed 's/^0*//')
++	echo $ALL_TESTS | awk '{print $'$field_num'}'
++}
++
++function get_test_count()
++{
++	TEST_DATA=$(get_test_data $1)
+ 	LAST_TWO=${TEST_DATA#*:*}
+ 	echo ${LAST_TWO%:*}
+ }
+ 
+ function get_test_enabled()
+ {
+-	test_num $1
+-	TEST_DATA=$(echo $ALL_TESTS | awk '{print $'$1'}')
++	TEST_DATA=$(get_test_data $1)
+ 	echo ${TEST_DATA#*:*:}
+ }
+ 
+diff --git a/tools/testing/selftests/net/fib_nexthops.sh b/tools/testing/selftests/net/fib_nexthops.sh
+index 796670ebc65b..6560ed796ac4 100755
+--- a/tools/testing/selftests/net/fib_nexthops.sh
++++ b/tools/testing/selftests/net/fib_nexthops.sh
+@@ -749,6 +749,29 @@ ipv4_fcnal_runtime()
+ 	run_cmd "ip netns exec me ping -c1 -w1 172.16.101.1"
+ 	log_test $? 0 "Ping - multipath"
+ 
++	run_cmd "$IP ro delete 172.16.101.1/32 nhid 122"
++
++	#
++	# multiple default routes
++	# - tests fib_select_default
++	run_cmd "$IP nexthop add id 501 via 172.16.1.2 dev veth1"
++	run_cmd "$IP ro add default nhid 501"
++	run_cmd "$IP ro add default via 172.16.1.3 dev veth1 metric 20"
++	run_cmd "ip netns exec me ping -c1 -w1 172.16.101.1"
++	log_test $? 0 "Ping - multiple default routes, nh first"
++
++	# flip the order
++	run_cmd "$IP ro del default nhid 501"
++	run_cmd "$IP ro del default via 172.16.1.3 dev veth1 metric 20"
++	run_cmd "$IP ro add default via 172.16.1.2 dev veth1 metric 20"
++	run_cmd "$IP nexthop replace id 501 via 172.16.1.3 dev veth1"
++	run_cmd "$IP ro add default nhid 501 metric 20"
++	run_cmd "ip netns exec me ping -c1 -w1 172.16.101.1"
++	log_test $? 0 "Ping - multiple default routes, nh second"
++
++	run_cmd "$IP nexthop delete nhid 501"
++	run_cmd "$IP ro del default"
++
+ 	#
+ 	# IPv4 with blackhole nexthops
+ 	#
+diff --git a/tools/testing/selftests/net/fib_tests.sh b/tools/testing/selftests/net/fib_tests.sh
+index b7616704b55e..84205c3a55eb 100755
+--- a/tools/testing/selftests/net/fib_tests.sh
++++ b/tools/testing/selftests/net/fib_tests.sh
+@@ -618,16 +618,22 @@ fib_nexthop_test()
+ 
+ fib_suppress_test()
+ {
++	echo
++	echo "FIB rule with suppress_prefixlength"
++	setup
++
+ 	$IP link add dummy1 type dummy
+ 	$IP link set dummy1 up
+ 	$IP -6 route add default dev dummy1
+ 	$IP -6 rule add table main suppress_prefixlength 0
+-	ping -f -c 1000 -W 1 1234::1 || true
++	ping -f -c 1000 -W 1 1234::1 >/dev/null 2>&1
+ 	$IP -6 rule del table main suppress_prefixlength 0
+ 	$IP link del dummy1
+ 
+ 	# If we got here without crashing, we're good.
+-	return 0
++	log_test 0 0 "FIB rule suppress test"
++
++	cleanup
+ }
+ 
+ ################################################################################
+diff --git a/tools/vm/Makefile b/tools/vm/Makefile
+index 20f6cf04377f..9860622cbb15 100644
+--- a/tools/vm/Makefile
++++ b/tools/vm/Makefile
+@@ -1,6 +1,8 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for vm tools
+ #
++include ../scripts/Makefile.include
++
+ TARGETS=page-types slabinfo page_owner_sort
+ 
+ LIB_DIR = ../lib/api



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-02 13:26 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-02 13:26 UTC (permalink / raw)
  To: gentoo-commits

commit:     6325bd142c1dd00cc25073175adc64ba81e7a604
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May  2 13:26:35 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May  2 13:26:35 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=6325bd14

Linux patch 5.6.9

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README            |    4 +
 1008_linux-5.6.9.patch | 4807 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4811 insertions(+)

diff --git a/0000_README b/0000_README
index d756ad3..8794f80 100644
--- a/0000_README
+++ b/0000_README
@@ -75,6 +75,10 @@ Patch:  1007_linux-5.6.8.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.8
 
+Patch:  1008_linux-5.6.9.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.9
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1008_linux-5.6.9.patch b/1008_linux-5.6.9.patch
new file mode 100644
index 0000000..1ec7c3d
--- /dev/null
+++ b/1008_linux-5.6.9.patch
@@ -0,0 +1,4807 @@
+diff --git a/Makefile b/Makefile
+index e7101c99d81b..2fc8ba07d930 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 8
++SUBLEVEL = 9
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/bcm2835-rpi.dtsi b/arch/arm/boot/dts/bcm2835-rpi.dtsi
+index fd2c766e0f71..f7ae5a4530b8 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi.dtsi
++++ b/arch/arm/boot/dts/bcm2835-rpi.dtsi
+@@ -14,6 +14,9 @@
+ 	soc {
+ 		firmware: firmware {
+ 			compatible = "raspberrypi,bcm2835-firmware", "simple-bus";
++			#address-cells = <1>;
++			#size-cells = <1>;
++
+ 			mboxes = <&mailbox>;
+ 			dma-ranges;
+ 		};
+diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
+index e1abe8c730ce..b83a864e2e8b 100644
+--- a/arch/arm/boot/dts/bcm283x.dtsi
++++ b/arch/arm/boot/dts/bcm283x.dtsi
+@@ -372,6 +372,7 @@
+ 					     "dsi0_ddr2",
+ 					     "dsi0_ddr";
+ 
++			status = "disabled";
+ 		};
+ 
+ 		aux: aux@7e215000 {
+diff --git a/arch/arm/boot/dts/omap3-n950-n9.dtsi b/arch/arm/boot/dts/omap3-n950-n9.dtsi
+index a075b63f3087..11d41e86f814 100644
+--- a/arch/arm/boot/dts/omap3-n950-n9.dtsi
++++ b/arch/arm/boot/dts/omap3-n950-n9.dtsi
+@@ -341,6 +341,11 @@
+ 	status = "disabled";
+ };
+ 
++/* RNG not directly accessible on N950/N9. */
++&rng_target {
++	status = "disabled";
++};
++
+ &usb_otg_hs {
+ 	interface-type = <0>;
+ 	usb-phy = <&usb2_phy>;
+diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
+index b91570ff9db1..931037500e83 100644
+--- a/arch/arm64/include/asm/sysreg.h
++++ b/arch/arm64/include/asm/sysreg.h
+@@ -49,7 +49,9 @@
+ #ifndef CONFIG_BROKEN_GAS_INST
+ 
+ #ifdef __ASSEMBLY__
+-#define __emit_inst(x)			.inst (x)
++// The space separator is omitted so that __emit_inst(x) can be parsed as
++// either an assembler directive or an assembler macro argument.
++#define __emit_inst(x)			.inst(x)
+ #else
+ #define __emit_inst(x)			".inst " __stringify((x)) "\n\t"
+ #endif
+diff --git a/arch/s390/kernel/diag.c b/arch/s390/kernel/diag.c
+index 61f2b0412345..ccba63aaeb47 100644
+--- a/arch/s390/kernel/diag.c
++++ b/arch/s390/kernel/diag.c
+@@ -133,7 +133,7 @@ void diag_stat_inc(enum diag_stat_enum nr)
+ }
+ EXPORT_SYMBOL(diag_stat_inc);
+ 
+-void diag_stat_inc_norecursion(enum diag_stat_enum nr)
++void notrace diag_stat_inc_norecursion(enum diag_stat_enum nr)
+ {
+ 	this_cpu_inc(diag_stat.counter[nr]);
+ 	trace_s390_diagnose_norecursion(diag_map[nr].code);
+diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
+index f87d4e14269c..4f8cb8d1c51b 100644
+--- a/arch/s390/kernel/smp.c
++++ b/arch/s390/kernel/smp.c
+@@ -403,7 +403,7 @@ int smp_find_processor_id(u16 address)
+ 	return -1;
+ }
+ 
+-bool arch_vcpu_is_preempted(int cpu)
++bool notrace arch_vcpu_is_preempted(int cpu)
+ {
+ 	if (test_cpu_flag_of(CIF_ENABLED_WAIT, cpu))
+ 		return false;
+@@ -413,7 +413,7 @@ bool arch_vcpu_is_preempted(int cpu)
+ }
+ EXPORT_SYMBOL(arch_vcpu_is_preempted);
+ 
+-void smp_yield_cpu(int cpu)
++void notrace smp_yield_cpu(int cpu)
+ {
+ 	if (!MACHINE_HAS_DIAG9C)
+ 		return;
+diff --git a/arch/s390/kernel/trace.c b/arch/s390/kernel/trace.c
+index 490b52e85014..11a669f3cc93 100644
+--- a/arch/s390/kernel/trace.c
++++ b/arch/s390/kernel/trace.c
+@@ -14,7 +14,7 @@ EXPORT_TRACEPOINT_SYMBOL(s390_diagnose);
+ 
+ static DEFINE_PER_CPU(unsigned int, diagnose_trace_depth);
+ 
+-void trace_s390_diagnose_norecursion(int diag_nr)
++void notrace trace_s390_diagnose_norecursion(int diag_nr)
+ {
+ 	unsigned long flags;
+ 	unsigned int *depth;
+diff --git a/arch/s390/pci/pci_irq.c b/arch/s390/pci/pci_irq.c
+index fbe97ab2e228..743f257cf2cb 100644
+--- a/arch/s390/pci/pci_irq.c
++++ b/arch/s390/pci/pci_irq.c
+@@ -115,7 +115,6 @@ static struct irq_chip zpci_irq_chip = {
+ 	.name = "PCI-MSI",
+ 	.irq_unmask = pci_msi_unmask_irq,
+ 	.irq_mask = pci_msi_mask_irq,
+-	.irq_set_affinity = zpci_set_irq_affinity,
+ };
+ 
+ static void zpci_handle_cpu_local_irq(bool rescan)
+@@ -276,7 +275,9 @@ int arch_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
+ 		rc = -EIO;
+ 		if (hwirq - bit >= msi_vecs)
+ 			break;
+-		irq = __irq_alloc_descs(-1, 0, 1, 0, THIS_MODULE, msi->affinity);
++		irq = __irq_alloc_descs(-1, 0, 1, 0, THIS_MODULE,
++				(irq_delivery == DIRECTED) ?
++				msi->affinity : NULL);
+ 		if (irq < 0)
+ 			return -ENOMEM;
+ 		rc = irq_set_msi_desc(irq, msi);
+diff --git a/arch/um/Makefile b/arch/um/Makefile
+index d2daa206872d..275f5ffdf6f0 100644
+--- a/arch/um/Makefile
++++ b/arch/um/Makefile
+@@ -140,6 +140,7 @@ export CFLAGS_vmlinux := $(LINK-y) $(LINK_WRAPS) $(LD_FLAGS_CMDLINE)
+ # When cleaning we don't include .config, so we don't include
+ # TT or skas makefiles and don't clean skas_ptregs.h.
+ CLEAN_FILES += linux x.i gmon.out
++MRPROPER_DIRS += arch/$(SUBARCH)/include/generated
+ 
+ archclean:
+ 	@find . \( -name '*.bb' -o -name '*.bbg' -o -name '*.da' \
+diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
+index 5e296a7e6036..ebf34c7bc8bc 100644
+--- a/arch/x86/kernel/cpu/mshyperv.c
++++ b/arch/x86/kernel/cpu/mshyperv.c
+@@ -227,8 +227,8 @@ static void __init ms_hyperv_init_platform(void)
+ 	ms_hyperv.misc_features = cpuid_edx(HYPERV_CPUID_FEATURES);
+ 	ms_hyperv.hints    = cpuid_eax(HYPERV_CPUID_ENLIGHTMENT_INFO);
+ 
+-	pr_info("Hyper-V: features 0x%x, hints 0x%x\n",
+-		ms_hyperv.features, ms_hyperv.hints);
++	pr_info("Hyper-V: features 0x%x, hints 0x%x, misc 0x%x\n",
++		ms_hyperv.features, ms_hyperv.hints, ms_hyperv.misc_features);
+ 
+ 	ms_hyperv.max_vp_index = cpuid_eax(HYPERV_CPUID_IMPLEMENT_LIMITS);
+ 	ms_hyperv.max_lp_index = cpuid_ebx(HYPERV_CPUID_IMPLEMENT_LIMITS);
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index 9ba08e9abc09..6aa53c33b471 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -158,6 +158,19 @@ static bool is_ereg(u32 reg)
+ 			     BIT(BPF_REG_AX));
+ }
+ 
++/*
++ * is_ereg_8l() == true if BPF register 'reg' is mapped to access x86-64
++ * lower 8-bit registers dil,sil,bpl,spl,r8b..r15b, which need extra byte
++ * of encoding. al,cl,dl,bl have simpler encoding.
++ */
++static bool is_ereg_8l(u32 reg)
++{
++	return is_ereg(reg) ||
++	    (1 << reg) & (BIT(BPF_REG_1) |
++			  BIT(BPF_REG_2) |
++			  BIT(BPF_REG_FP));
++}
++
+ static bool is_axreg(u32 reg)
+ {
+ 	return reg == BPF_REG_0;
+@@ -598,9 +611,8 @@ static void emit_stx(u8 **pprog, u32 size, u32 dst_reg, u32 src_reg, int off)
+ 	switch (size) {
+ 	case BPF_B:
+ 		/* Emit 'mov byte ptr [rax + off], al' */
+-		if (is_ereg(dst_reg) || is_ereg(src_reg) ||
+-		    /* We have to add extra byte for x86 SIL, DIL regs */
+-		    src_reg == BPF_REG_1 || src_reg == BPF_REG_2)
++		if (is_ereg(dst_reg) || is_ereg_8l(src_reg))
++			/* Add extra byte for eregs or SIL,DIL,BPL in src_reg */
+ 			EMIT2(add_2mod(0x40, dst_reg, src_reg), 0x88);
+ 		else
+ 			EMIT1(0x88);
+diff --git a/arch/x86/net/bpf_jit_comp32.c b/arch/x86/net/bpf_jit_comp32.c
+index 4d2a7a764602..66cd150b7e54 100644
+--- a/arch/x86/net/bpf_jit_comp32.c
++++ b/arch/x86/net/bpf_jit_comp32.c
+@@ -1847,14 +1847,16 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 			case BPF_B:
+ 			case BPF_H:
+ 			case BPF_W:
+-				if (!bpf_prog->aux->verifier_zext)
++				if (bpf_prog->aux->verifier_zext)
+ 					break;
+ 				if (dstk) {
+ 					EMIT3(0xC7, add_1reg(0x40, IA32_EBP),
+ 					      STACK_VAR(dst_hi));
+ 					EMIT(0x0, 4);
+ 				} else {
+-					EMIT3(0xC7, add_1reg(0xC0, dst_hi), 0);
++					/* xor dst_hi,dst_hi */
++					EMIT2(0x33,
++					      add_2reg(0xC0, dst_hi, dst_hi));
+ 				}
+ 				break;
+ 			case BPF_DW:
+@@ -2013,8 +2015,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 		case BPF_JMP | BPF_JSET | BPF_X:
+ 		case BPF_JMP32 | BPF_JSET | BPF_X: {
+ 			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
+-			u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
+-			u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
++			u8 dreg_lo = IA32_EAX;
++			u8 dreg_hi = IA32_EDX;
+ 			u8 sreg_lo = sstk ? IA32_ECX : src_lo;
+ 			u8 sreg_hi = sstk ? IA32_EBX : src_hi;
+ 
+@@ -2026,6 +2028,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 					      add_2reg(0x40, IA32_EBP,
+ 						       IA32_EDX),
+ 					      STACK_VAR(dst_hi));
++			} else {
++				/* mov dreg_lo,dst_lo */
++				EMIT2(0x89, add_2reg(0xC0, dreg_lo, dst_lo));
++				if (is_jmp64)
++					/* mov dreg_hi,dst_hi */
++					EMIT2(0x89,
++					      add_2reg(0xC0, dreg_hi, dst_hi));
+ 			}
+ 
+ 			if (sstk) {
+@@ -2050,8 +2059,8 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 		case BPF_JMP | BPF_JSET | BPF_K:
+ 		case BPF_JMP32 | BPF_JSET | BPF_K: {
+ 			bool is_jmp64 = BPF_CLASS(insn->code) == BPF_JMP;
+-			u8 dreg_lo = dstk ? IA32_EAX : dst_lo;
+-			u8 dreg_hi = dstk ? IA32_EDX : dst_hi;
++			u8 dreg_lo = IA32_EAX;
++			u8 dreg_hi = IA32_EDX;
+ 			u8 sreg_lo = IA32_ECX;
+ 			u8 sreg_hi = IA32_EBX;
+ 			u32 hi;
+@@ -2064,6 +2073,13 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
+ 					      add_2reg(0x40, IA32_EBP,
+ 						       IA32_EDX),
+ 					      STACK_VAR(dst_hi));
++			} else {
++				/* mov dreg_lo,dst_lo */
++				EMIT2(0x89, add_2reg(0xC0, dreg_lo, dst_lo));
++				if (is_jmp64)
++					/* mov dreg_hi,dst_hi */
++					EMIT2(0x89,
++					      add_2reg(0xC0, dreg_hi, dst_hi));
+ 			}
+ 
+ 			/* mov ecx,imm32 */
+diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
+index a47294063882..a20914b38e6a 100644
+--- a/arch/x86/platform/efi/efi_64.c
++++ b/arch/x86/platform/efi/efi_64.c
+@@ -202,7 +202,7 @@ virt_to_phys_or_null_size(void *va, unsigned long size)
+ 
+ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
+ {
+-	unsigned long pfn, text, pf;
++	unsigned long pfn, text, pf, rodata;
+ 	struct page *page;
+ 	unsigned npages;
+ 	pgd_t *pgd = efi_mm.pgd;
+@@ -256,7 +256,7 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
+ 
+ 	efi_scratch.phys_stack = page_to_phys(page + 1); /* stack grows down */
+ 
+-	npages = (__end_rodata_aligned - _text) >> PAGE_SHIFT;
++	npages = (_etext - _text) >> PAGE_SHIFT;
+ 	text = __pa(_text);
+ 	pfn = text >> PAGE_SHIFT;
+ 
+@@ -266,6 +266,14 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
+ 		return 1;
+ 	}
+ 
++	npages = (__end_rodata - __start_rodata) >> PAGE_SHIFT;
++	rodata = __pa(__start_rodata);
++	pfn = rodata >> PAGE_SHIFT;
++	if (kernel_map_pages_in_pgd(pgd, pfn, rodata, npages, pf)) {
++		pr_err("Failed to map kernel rodata 1:1\n");
++		return 1;
++	}
++
+ 	return 0;
+ }
+ 
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 9a599cc28c29..2dc5dc54e257 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -1594,7 +1594,7 @@ skip_surplus_transfers:
+ 				      vrate_min, vrate_max);
+ 		}
+ 
+-		trace_iocost_ioc_vrate_adj(ioc, vrate, &missed_ppm, rq_wait_pct,
++		trace_iocost_ioc_vrate_adj(ioc, vrate, missed_ppm, rq_wait_pct,
+ 					   nr_lagging, nr_shortages,
+ 					   nr_surpluses);
+ 
+@@ -1603,7 +1603,7 @@ skip_surplus_transfers:
+ 			ioc->period_us * vrate * INUSE_MARGIN_PCT, 100);
+ 	} else if (ioc->busy_level != prev_busy_level || nr_lagging) {
+ 		trace_iocost_ioc_vrate_adj(ioc, atomic64_read(&ioc->vtime_rate),
+-					   &missed_ppm, rq_wait_pct, nr_lagging,
++					   missed_ppm, rq_wait_pct, nr_lagging,
+ 					   nr_shortages, nr_surpluses);
+ 	}
+ 
+diff --git a/block/blk-mq.c b/block/blk-mq.c
+index 37ff8dfb8ab9..2c3a1b2e0753 100644
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -1205,8 +1205,10 @@ bool blk_mq_dispatch_rq_list(struct request_queue *q, struct list_head *list,
+ 		rq = list_first_entry(list, struct request, queuelist);
+ 
+ 		hctx = rq->mq_hctx;
+-		if (!got_budget && !blk_mq_get_dispatch_budget(hctx))
++		if (!got_budget && !blk_mq_get_dispatch_budget(hctx)) {
++			blk_mq_put_driver_tag(rq);
+ 			break;
++		}
+ 
+ 		if (!blk_mq_get_driver_tag(rq)) {
+ 			/*
+diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
+index 0e99a760aebd..8646147dc194 100644
+--- a/drivers/base/power/main.c
++++ b/drivers/base/power/main.c
+@@ -726,7 +726,7 @@ static bool dpm_async_fn(struct device *dev, async_func_t func)
+ 
+ 	if (is_async(dev)) {
+ 		get_device(dev);
+-		async_schedule(func, dev);
++		async_schedule_dev(func, dev);
+ 		return true;
+ 	}
+ 
+diff --git a/drivers/clk/clk-asm9260.c b/drivers/clk/clk-asm9260.c
+index 536b59aabd2c..bacebd457e6f 100644
+--- a/drivers/clk/clk-asm9260.c
++++ b/drivers/clk/clk-asm9260.c
+@@ -276,7 +276,7 @@ static void __init asm9260_acc_init(struct device_node *np)
+ 
+ 	/* TODO: Convert to DT parent scheme */
+ 	ref_clk = of_clk_get_parent_name(np, 0);
+-	hw = __clk_hw_register_fixed_rate_with_accuracy(NULL, NULL, pll_clk,
++	hw = __clk_hw_register_fixed_rate(NULL, NULL, pll_clk,
+ 			ref_clk, NULL, NULL, 0, rate, 0,
+ 			CLK_FIXED_RATE_PARENT_ACCURACY);
+ 
+diff --git a/drivers/counter/104-quad-8.c b/drivers/counter/104-quad-8.c
+index 17e67a84777d..dd0a57f80988 100644
+--- a/drivers/counter/104-quad-8.c
++++ b/drivers/counter/104-quad-8.c
+@@ -42,6 +42,7 @@ MODULE_PARM_DESC(base, "ACCES 104-QUAD-8 base addresses");
+  * @base:		base port address of the IIO device
+  */
+ struct quad8_iio {
++	struct mutex lock;
+ 	struct counter_device counter;
+ 	unsigned int preset[QUAD8_NUM_COUNTERS];
+ 	unsigned int count_mode[QUAD8_NUM_COUNTERS];
+@@ -116,6 +117,8 @@ static int quad8_read_raw(struct iio_dev *indio_dev,
+ 		/* Borrow XOR Carry effectively doubles count range */
+ 		*val = (borrow ^ carry) << 24;
+ 
++		mutex_lock(&priv->lock);
++
+ 		/* Reset Byte Pointer; transfer Counter to Output Latch */
+ 		outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_CNTR_OUT,
+ 		     base_offset + 1);
+@@ -123,6 +126,8 @@ static int quad8_read_raw(struct iio_dev *indio_dev,
+ 		for (i = 0; i < 3; i++)
+ 			*val |= (unsigned int)inb(base_offset) << (8 * i);
+ 
++		mutex_unlock(&priv->lock);
++
+ 		return IIO_VAL_INT;
+ 	case IIO_CHAN_INFO_ENABLE:
+ 		*val = priv->ab_enable[chan->channel];
+@@ -153,6 +158,8 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 		if ((unsigned int)val > 0xFFFFFF)
+ 			return -EINVAL;
+ 
++		mutex_lock(&priv->lock);
++
+ 		/* Reset Byte Pointer */
+ 		outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
+ 
+@@ -176,12 +183,16 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 		/* Reset Error flag */
+ 		outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, base_offset + 1);
+ 
++		mutex_unlock(&priv->lock);
++
+ 		return 0;
+ 	case IIO_CHAN_INFO_ENABLE:
+ 		/* only boolean values accepted */
+ 		if (val < 0 || val > 1)
+ 			return -EINVAL;
+ 
++		mutex_lock(&priv->lock);
++
+ 		priv->ab_enable[chan->channel] = val;
+ 
+ 		ior_cfg = val | priv->preset_enable[chan->channel] << 1;
+@@ -189,11 +200,18 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 		/* Load I/O control configuration */
+ 		outb(QUAD8_CTR_IOR | ior_cfg, base_offset + 1);
+ 
++		mutex_unlock(&priv->lock);
++
+ 		return 0;
+ 	case IIO_CHAN_INFO_SCALE:
++		mutex_lock(&priv->lock);
++
+ 		/* Quadrature scaling only available in quadrature mode */
+-		if (!priv->quadrature_mode[chan->channel] && (val2 || val != 1))
++		if (!priv->quadrature_mode[chan->channel] &&
++				(val2 || val != 1)) {
++			mutex_unlock(&priv->lock);
+ 			return -EINVAL;
++		}
+ 
+ 		/* Only three gain states (1, 0.5, 0.25) */
+ 		if (val == 1 && !val2)
+@@ -207,11 +225,15 @@ static int quad8_write_raw(struct iio_dev *indio_dev,
+ 				priv->quadrature_scale[chan->channel] = 2;
+ 				break;
+ 			default:
++				mutex_unlock(&priv->lock);
+ 				return -EINVAL;
+ 			}
+-		else
++		else {
++			mutex_unlock(&priv->lock);
+ 			return -EINVAL;
++		}
+ 
++		mutex_unlock(&priv->lock);
+ 		return 0;
+ 	}
+ 
+@@ -248,6 +270,8 @@ static ssize_t quad8_write_preset(struct iio_dev *indio_dev, uintptr_t private,
+ 	if (preset > 0xFFFFFF)
+ 		return -EINVAL;
+ 
++	mutex_lock(&priv->lock);
++
+ 	priv->preset[chan->channel] = preset;
+ 
+ 	/* Reset Byte Pointer */
+@@ -257,6 +281,8 @@ static ssize_t quad8_write_preset(struct iio_dev *indio_dev, uintptr_t private,
+ 	for (i = 0; i < 3; i++)
+ 		outb(preset >> (8 * i), base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return len;
+ }
+ 
+@@ -286,6 +312,8 @@ static ssize_t quad8_write_set_to_preset_on_index(struct iio_dev *indio_dev,
+ 	/* Preset enable is active low in Input/Output Control register */
+ 	preset_enable = !preset_enable;
+ 
++	mutex_lock(&priv->lock);
++
+ 	priv->preset_enable[chan->channel] = preset_enable;
+ 
+ 	ior_cfg = priv->ab_enable[chan->channel] |
+@@ -294,6 +322,8 @@ static ssize_t quad8_write_set_to_preset_on_index(struct iio_dev *indio_dev,
+ 	/* Load I/O control configuration to Input / Output Control Register */
+ 	outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return len;
+ }
+ 
+@@ -351,6 +381,8 @@ static int quad8_set_count_mode(struct iio_dev *indio_dev,
+ 	unsigned int mode_cfg = cnt_mode << 1;
+ 	const int base_offset = priv->base + 2 * chan->channel + 1;
+ 
++	mutex_lock(&priv->lock);
++
+ 	priv->count_mode[chan->channel] = cnt_mode;
+ 
+ 	/* Add quadrature mode configuration */
+@@ -360,6 +392,8 @@ static int quad8_set_count_mode(struct iio_dev *indio_dev,
+ 	/* Load mode configuration to Counter Mode Register */
+ 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -387,19 +421,26 @@ static int quad8_set_synchronous_mode(struct iio_dev *indio_dev,
+ 	const struct iio_chan_spec *chan, unsigned int synchronous_mode)
+ {
+ 	struct quad8_iio *const priv = iio_priv(indio_dev);
+-	const unsigned int idr_cfg = synchronous_mode |
+-		priv->index_polarity[chan->channel] << 1;
+ 	const int base_offset = priv->base + 2 * chan->channel + 1;
++	unsigned int idr_cfg = synchronous_mode;
++
++	mutex_lock(&priv->lock);
++
++	idr_cfg |= priv->index_polarity[chan->channel] << 1;
+ 
+ 	/* Index function must be non-synchronous in non-quadrature mode */
+-	if (synchronous_mode && !priv->quadrature_mode[chan->channel])
++	if (synchronous_mode && !priv->quadrature_mode[chan->channel]) {
++		mutex_unlock(&priv->lock);
+ 		return -EINVAL;
++	}
+ 
+ 	priv->synchronous_mode[chan->channel] = synchronous_mode;
+ 
+ 	/* Load Index Control configuration to Index Control Register */
+ 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -427,8 +468,12 @@ static int quad8_set_quadrature_mode(struct iio_dev *indio_dev,
+ 	const struct iio_chan_spec *chan, unsigned int quadrature_mode)
+ {
+ 	struct quad8_iio *const priv = iio_priv(indio_dev);
+-	unsigned int mode_cfg = priv->count_mode[chan->channel] << 1;
+ 	const int base_offset = priv->base + 2 * chan->channel + 1;
++	unsigned int mode_cfg;
++
++	mutex_lock(&priv->lock);
++
++	mode_cfg = priv->count_mode[chan->channel] << 1;
+ 
+ 	if (quadrature_mode)
+ 		mode_cfg |= (priv->quadrature_scale[chan->channel] + 1) << 3;
+@@ -446,6 +491,8 @@ static int quad8_set_quadrature_mode(struct iio_dev *indio_dev,
+ 	/* Load mode configuration to Counter Mode Register */
+ 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -473,15 +520,20 @@ static int quad8_set_index_polarity(struct iio_dev *indio_dev,
+ 	const struct iio_chan_spec *chan, unsigned int index_polarity)
+ {
+ 	struct quad8_iio *const priv = iio_priv(indio_dev);
+-	const unsigned int idr_cfg = priv->synchronous_mode[chan->channel] |
+-		index_polarity << 1;
+ 	const int base_offset = priv->base + 2 * chan->channel + 1;
++	unsigned int idr_cfg = index_polarity << 1;
++
++	mutex_lock(&priv->lock);
++
++	idr_cfg |= priv->synchronous_mode[chan->channel];
+ 
+ 	priv->index_polarity[chan->channel] = index_polarity;
+ 
+ 	/* Load Index Control configuration to Index Control Register */
+ 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -582,7 +634,7 @@ static int quad8_signal_read(struct counter_device *counter,
+ static int quad8_count_read(struct counter_device *counter,
+ 	struct counter_count *count, unsigned long *val)
+ {
+-	const struct quad8_iio *const priv = counter->priv;
++	struct quad8_iio *const priv = counter->priv;
+ 	const int base_offset = priv->base + 2 * count->id;
+ 	unsigned int flags;
+ 	unsigned int borrow;
+@@ -596,6 +648,8 @@ static int quad8_count_read(struct counter_device *counter,
+ 	/* Borrow XOR Carry effectively doubles count range */
+ 	*val = (unsigned long)(borrow ^ carry) << 24;
+ 
++	mutex_lock(&priv->lock);
++
+ 	/* Reset Byte Pointer; transfer Counter to Output Latch */
+ 	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP | QUAD8_RLD_CNTR_OUT,
+ 	     base_offset + 1);
+@@ -603,13 +657,15 @@ static int quad8_count_read(struct counter_device *counter,
+ 	for (i = 0; i < 3; i++)
+ 		*val |= (unsigned long)inb(base_offset) << (8 * i);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+ static int quad8_count_write(struct counter_device *counter,
+ 	struct counter_count *count, unsigned long val)
+ {
+-	const struct quad8_iio *const priv = counter->priv;
++	struct quad8_iio *const priv = counter->priv;
+ 	const int base_offset = priv->base + 2 * count->id;
+ 	int i;
+ 
+@@ -617,6 +673,8 @@ static int quad8_count_write(struct counter_device *counter,
+ 	if (val > 0xFFFFFF)
+ 		return -EINVAL;
+ 
++	mutex_lock(&priv->lock);
++
+ 	/* Reset Byte Pointer */
+ 	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
+ 
+@@ -640,6 +698,8 @@ static int quad8_count_write(struct counter_device *counter,
+ 	/* Reset Error flag */
+ 	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_E, base_offset + 1);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -660,13 +720,13 @@ static enum counter_count_function quad8_count_functions_list[] = {
+ static int quad8_function_get(struct counter_device *counter,
+ 	struct counter_count *count, size_t *function)
+ {
+-	const struct quad8_iio *const priv = counter->priv;
++	struct quad8_iio *const priv = counter->priv;
+ 	const int id = count->id;
+-	const unsigned int quadrature_mode = priv->quadrature_mode[id];
+-	const unsigned int scale = priv->quadrature_scale[id];
+ 
+-	if (quadrature_mode)
+-		switch (scale) {
++	mutex_lock(&priv->lock);
++
++	if (priv->quadrature_mode[id])
++		switch (priv->quadrature_scale[id]) {
+ 		case 0:
+ 			*function = QUAD8_COUNT_FUNCTION_QUADRATURE_X1;
+ 			break;
+@@ -680,6 +740,8 @@ static int quad8_function_get(struct counter_device *counter,
+ 	else
+ 		*function = QUAD8_COUNT_FUNCTION_PULSE_DIRECTION;
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -690,10 +752,15 @@ static int quad8_function_set(struct counter_device *counter,
+ 	const int id = count->id;
+ 	unsigned int *const quadrature_mode = priv->quadrature_mode + id;
+ 	unsigned int *const scale = priv->quadrature_scale + id;
+-	unsigned int mode_cfg = priv->count_mode[id] << 1;
+ 	unsigned int *const synchronous_mode = priv->synchronous_mode + id;
+-	const unsigned int idr_cfg = priv->index_polarity[id] << 1;
+ 	const int base_offset = priv->base + 2 * id + 1;
++	unsigned int mode_cfg;
++	unsigned int idr_cfg;
++
++	mutex_lock(&priv->lock);
++
++	mode_cfg = priv->count_mode[id] << 1;
++	idr_cfg = priv->index_polarity[id] << 1;
+ 
+ 	if (function == QUAD8_COUNT_FUNCTION_PULSE_DIRECTION) {
+ 		*quadrature_mode = 0;
+@@ -729,6 +796,8 @@ static int quad8_function_set(struct counter_device *counter,
+ 	/* Load mode configuration to Counter Mode Register */
+ 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -845,15 +914,20 @@ static int quad8_index_polarity_set(struct counter_device *counter,
+ {
+ 	struct quad8_iio *const priv = counter->priv;
+ 	const size_t channel_id = signal->id - 16;
+-	const unsigned int idr_cfg = priv->synchronous_mode[channel_id] |
+-		index_polarity << 1;
+ 	const int base_offset = priv->base + 2 * channel_id + 1;
++	unsigned int idr_cfg = index_polarity << 1;
++
++	mutex_lock(&priv->lock);
++
++	idr_cfg |= priv->synchronous_mode[channel_id];
+ 
+ 	priv->index_polarity[channel_id] = index_polarity;
+ 
+ 	/* Load Index Control configuration to Index Control Register */
+ 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -880,19 +954,26 @@ static int quad8_synchronous_mode_set(struct counter_device *counter,
+ {
+ 	struct quad8_iio *const priv = counter->priv;
+ 	const size_t channel_id = signal->id - 16;
+-	const unsigned int idr_cfg = synchronous_mode |
+-		priv->index_polarity[channel_id] << 1;
+ 	const int base_offset = priv->base + 2 * channel_id + 1;
++	unsigned int idr_cfg = synchronous_mode;
++
++	mutex_lock(&priv->lock);
++
++	idr_cfg |= priv->index_polarity[channel_id] << 1;
+ 
+ 	/* Index function must be non-synchronous in non-quadrature mode */
+-	if (synchronous_mode && !priv->quadrature_mode[channel_id])
++	if (synchronous_mode && !priv->quadrature_mode[channel_id]) {
++		mutex_unlock(&priv->lock);
+ 		return -EINVAL;
++	}
+ 
+ 	priv->synchronous_mode[channel_id] = synchronous_mode;
+ 
+ 	/* Load Index Control configuration to Index Control Register */
+ 	outb(QUAD8_CTR_IDR | idr_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -957,6 +1038,8 @@ static int quad8_count_mode_set(struct counter_device *counter,
+ 		break;
+ 	}
+ 
++	mutex_lock(&priv->lock);
++
+ 	priv->count_mode[count->id] = cnt_mode;
+ 
+ 	/* Set count mode configuration value */
+@@ -969,6 +1052,8 @@ static int quad8_count_mode_set(struct counter_device *counter,
+ 	/* Load mode configuration to Counter Mode Register */
+ 	outb(QUAD8_CTR_CMR | mode_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return 0;
+ }
+ 
+@@ -1010,6 +1095,8 @@ static ssize_t quad8_count_enable_write(struct counter_device *counter,
+ 	if (err)
+ 		return err;
+ 
++	mutex_lock(&priv->lock);
++
+ 	priv->ab_enable[count->id] = ab_enable;
+ 
+ 	ior_cfg = ab_enable | priv->preset_enable[count->id] << 1;
+@@ -1017,6 +1104,8 @@ static ssize_t quad8_count_enable_write(struct counter_device *counter,
+ 	/* Load I/O control configuration */
+ 	outb(QUAD8_CTR_IOR | ior_cfg, base_offset + 1);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return len;
+ }
+ 
+@@ -1045,14 +1134,28 @@ static ssize_t quad8_count_preset_read(struct counter_device *counter,
+ 	return sprintf(buf, "%u\n", priv->preset[count->id]);
+ }
+ 
++static void quad8_preset_register_set(struct quad8_iio *quad8iio, int id,
++		unsigned int preset)
++{
++	const unsigned int base_offset = quad8iio->base + 2 * id;
++	int i;
++
++	quad8iio->preset[id] = preset;
++
++	/* Reset Byte Pointer */
++	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
++
++	/* Set Preset Register */
++	for (i = 0; i < 3; i++)
++		outb(preset >> (8 * i), base_offset);
++}
++
+ static ssize_t quad8_count_preset_write(struct counter_device *counter,
+ 	struct counter_count *count, void *private, const char *buf, size_t len)
+ {
+ 	struct quad8_iio *const priv = counter->priv;
+-	const int base_offset = priv->base + 2 * count->id;
+ 	unsigned int preset;
+ 	int ret;
+-	int i;
+ 
+ 	ret = kstrtouint(buf, 0, &preset);
+ 	if (ret)
+@@ -1062,14 +1165,11 @@ static ssize_t quad8_count_preset_write(struct counter_device *counter,
+ 	if (preset > 0xFFFFFF)
+ 		return -EINVAL;
+ 
+-	priv->preset[count->id] = preset;
++	mutex_lock(&priv->lock);
+ 
+-	/* Reset Byte Pointer */
+-	outb(QUAD8_CTR_RLD | QUAD8_RLD_RESET_BP, base_offset + 1);
++	quad8_preset_register_set(priv, count->id, preset);
+ 
+-	/* Set Preset Register */
+-	for (i = 0; i < 3; i++)
+-		outb(preset >> (8 * i), base_offset);
++	mutex_unlock(&priv->lock);
+ 
+ 	return len;
+ }
+@@ -1077,15 +1177,20 @@ static ssize_t quad8_count_preset_write(struct counter_device *counter,
+ static ssize_t quad8_count_ceiling_read(struct counter_device *counter,
+ 	struct counter_count *count, void *private, char *buf)
+ {
+-	const struct quad8_iio *const priv = counter->priv;
++	struct quad8_iio *const priv = counter->priv;
++
++	mutex_lock(&priv->lock);
+ 
+ 	/* Range Limit and Modulo-N count modes use preset value as ceiling */
+ 	switch (priv->count_mode[count->id]) {
+ 	case 1:
+ 	case 3:
+-		return quad8_count_preset_read(counter, count, private, buf);
++		mutex_unlock(&priv->lock);
++		return sprintf(buf, "%u\n", priv->preset[count->id]);
+ 	}
+ 
++	mutex_unlock(&priv->lock);
++
+ 	/* By default 0x1FFFFFF (25 bits unsigned) is maximum count */
+ 	return sprintf(buf, "33554431\n");
+ }
+@@ -1094,15 +1199,29 @@ static ssize_t quad8_count_ceiling_write(struct counter_device *counter,
+ 	struct counter_count *count, void *private, const char *buf, size_t len)
+ {
+ 	struct quad8_iio *const priv = counter->priv;
++	unsigned int ceiling;
++	int ret;
++
++	ret = kstrtouint(buf, 0, &ceiling);
++	if (ret)
++		return ret;
++
++	/* Only 24-bit values are supported */
++	if (ceiling > 0xFFFFFF)
++		return -EINVAL;
++
++	mutex_lock(&priv->lock);
+ 
+ 	/* Range Limit and Modulo-N count modes use preset value as ceiling */
+ 	switch (priv->count_mode[count->id]) {
+ 	case 1:
+ 	case 3:
+-		return quad8_count_preset_write(counter, count, private, buf,
+-						len);
++		quad8_preset_register_set(priv, count->id, ceiling);
++		break;
+ 	}
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return len;
+ }
+ 
+@@ -1130,6 +1249,8 @@ static ssize_t quad8_count_preset_enable_write(struct counter_device *counter,
+ 	/* Preset enable is active low in Input/Output Control register */
+ 	preset_enable = !preset_enable;
+ 
++	mutex_lock(&priv->lock);
++
+ 	priv->preset_enable[count->id] = preset_enable;
+ 
+ 	ior_cfg = priv->ab_enable[count->id] | (unsigned int)preset_enable << 1;
+@@ -1137,6 +1258,8 @@ static ssize_t quad8_count_preset_enable_write(struct counter_device *counter,
+ 	/* Load I/O control configuration to Input / Output Control Register */
+ 	outb(QUAD8_CTR_IOR | ior_cfg, base_offset);
+ 
++	mutex_unlock(&priv->lock);
++
+ 	return len;
+ }
+ 
+@@ -1307,6 +1430,9 @@ static int quad8_probe(struct device *dev, unsigned int id)
+ 	quad8iio->counter.priv = quad8iio;
+ 	quad8iio->base = base[id];
+ 
++	/* Initialize mutex */
++	mutex_init(&quad8iio->lock);
++
+ 	/* Reset all counters and disable interrupt function */
+ 	outb(QUAD8_CHAN_OP_RESET_COUNTERS, base[id] + QUAD8_REG_CHAN_OP);
+ 	/* Set initial configuration for all counters */
+diff --git a/drivers/crypto/chelsio/chcr_core.c b/drivers/crypto/chelsio/chcr_core.c
+index e937605670ac..8c2e85f884d3 100644
+--- a/drivers/crypto/chelsio/chcr_core.c
++++ b/drivers/crypto/chelsio/chcr_core.c
+@@ -125,8 +125,6 @@ static void chcr_dev_init(struct uld_ctx *u_ctx)
+ 	atomic_set(&dev->inflight, 0);
+ 	mutex_lock(&drv_data.drv_mutex);
+ 	list_add_tail(&u_ctx->entry, &drv_data.inact_dev);
+-	if (!drv_data.last_dev)
+-		drv_data.last_dev = u_ctx;
+ 	mutex_unlock(&drv_data.drv_mutex);
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index c8bf9cb3cebf..f184cdca938d 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -1953,8 +1953,24 @@ static void amdgpu_device_fill_reset_magic(struct amdgpu_device *adev)
+  */
+ static bool amdgpu_device_check_vram_lost(struct amdgpu_device *adev)
+ {
+-	return !!memcmp(adev->gart.ptr, adev->reset_magic,
+-			AMDGPU_RESET_MAGIC_NUM);
++	if (memcmp(adev->gart.ptr, adev->reset_magic,
++			AMDGPU_RESET_MAGIC_NUM))
++		return true;
++
++	if (!adev->in_gpu_reset)
++		return false;
++
++	/*
++	 * For all ASICs with baco/mode1 reset, the VRAM is
++	 * always assumed to be lost.
++	 */
++	switch (amdgpu_asic_reset_method(adev)) {
++	case AMD_RESET_METHOD_BACO:
++	case AMD_RESET_METHOD_MODE1:
++		return true;
++	default:
++		return false;
++	}
+ }
+ 
+ /**
+diff --git a/drivers/gpu/drm/amd/amdgpu/cik.c b/drivers/gpu/drm/amd/amdgpu/cik.c
+index 006f21ef7ddf..62635e58e45e 100644
+--- a/drivers/gpu/drm/amd/amdgpu/cik.c
++++ b/drivers/gpu/drm/amd/amdgpu/cik.c
+@@ -1358,8 +1358,6 @@ static int cik_asic_reset(struct amdgpu_device *adev)
+ 	int r;
+ 
+ 	if (cik_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
+-		if (!adev->in_suspend)
+-			amdgpu_inc_vram_lost(adev);
+ 		r = amdgpu_dpm_baco_reset(adev);
+ 	} else {
+ 		r = cik_asic_pci_config_reset(adev);
+diff --git a/drivers/gpu/drm/amd/amdgpu/nv.c b/drivers/gpu/drm/amd/amdgpu/nv.c
+index 2d1bebdf1603..cc3a79029376 100644
+--- a/drivers/gpu/drm/amd/amdgpu/nv.c
++++ b/drivers/gpu/drm/amd/amdgpu/nv.c
+@@ -351,8 +351,6 @@ static int nv_asic_reset(struct amdgpu_device *adev)
+ 	struct smu_context *smu = &adev->smu;
+ 
+ 	if (nv_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
+-		if (!adev->in_suspend)
+-			amdgpu_inc_vram_lost(adev);
+ 		ret = smu_baco_enter(smu);
+ 		if (ret)
+ 			return ret;
+@@ -360,8 +358,6 @@ static int nv_asic_reset(struct amdgpu_device *adev)
+ 		if (ret)
+ 			return ret;
+ 	} else {
+-		if (!adev->in_suspend)
+-			amdgpu_inc_vram_lost(adev);
+ 		ret = nv_asic_mode1_reset(adev);
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/amdgpu/soc15.c b/drivers/gpu/drm/amd/amdgpu/soc15.c
+index d8945c31b622..132a67a041a2 100644
+--- a/drivers/gpu/drm/amd/amdgpu/soc15.c
++++ b/drivers/gpu/drm/amd/amdgpu/soc15.c
+@@ -569,14 +569,10 @@ static int soc15_asic_reset(struct amdgpu_device *adev)
+ 
+ 	switch (soc15_asic_reset_method(adev)) {
+ 		case AMD_RESET_METHOD_BACO:
+-			if (!adev->in_suspend)
+-				amdgpu_inc_vram_lost(adev);
+ 			return soc15_asic_baco_reset(adev);
+ 		case AMD_RESET_METHOD_MODE2:
+ 			return amdgpu_dpm_mode2_reset(adev);
+ 		default:
+-			if (!adev->in_suspend)
+-				amdgpu_inc_vram_lost(adev);
+ 			return soc15_asic_mode1_reset(adev);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/vi.c b/drivers/gpu/drm/amd/amdgpu/vi.c
+index 78b35901643b..3ce10e05d0d6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/vi.c
++++ b/drivers/gpu/drm/amd/amdgpu/vi.c
+@@ -765,8 +765,6 @@ static int vi_asic_reset(struct amdgpu_device *adev)
+ 	int r;
+ 
+ 	if (vi_asic_reset_method(adev) == AMD_RESET_METHOD_BACO) {
+-		if (!adev->in_suspend)
+-			amdgpu_inc_vram_lost(adev);
+ 		r = amdgpu_dpm_baco_reset(adev);
+ 	} else {
+ 		r = vi_asic_pci_config_reset(adev);
+diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
+index 47ac20aee06f..4c1c61aa4b82 100644
+--- a/drivers/hwmon/Kconfig
++++ b/drivers/hwmon/Kconfig
+@@ -403,7 +403,7 @@ config SENSORS_DRIVETEMP
+ 	  hard disk drives.
+ 
+ 	  This driver can also be built as a module. If so, the module
+-	  will be called satatemp.
++	  will be called drivetemp.
+ 
+ config SENSORS_DS620
+ 	tristate "Dallas Semiconductor DS620"
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 370d0c74eb01..9179460c2d9d 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -264,12 +264,18 @@ static int drivetemp_get_scttemp(struct drivetemp_data *st, u32 attr, long *val)
+ 		return err;
+ 	switch (attr) {
+ 	case hwmon_temp_input:
++		if (!temp_is_valid(buf[SCT_STATUS_TEMP]))
++			return -ENODATA;
+ 		*val = temp_from_sct(buf[SCT_STATUS_TEMP]);
+ 		break;
+ 	case hwmon_temp_lowest:
++		if (!temp_is_valid(buf[SCT_STATUS_TEMP_LOWEST]))
++			return -ENODATA;
+ 		*val = temp_from_sct(buf[SCT_STATUS_TEMP_LOWEST]);
+ 		break;
+ 	case hwmon_temp_highest:
++		if (!temp_is_valid(buf[SCT_STATUS_TEMP_HIGHEST]))
++			return -ENODATA;
+ 		*val = temp_from_sct(buf[SCT_STATUS_TEMP_HIGHEST]);
+ 		break;
+ 	default:
+diff --git a/drivers/hwmon/jc42.c b/drivers/hwmon/jc42.c
+index f2d81b0558e5..e3f1ebee7130 100644
+--- a/drivers/hwmon/jc42.c
++++ b/drivers/hwmon/jc42.c
+@@ -506,7 +506,7 @@ static int jc42_probe(struct i2c_client *client, const struct i2c_device_id *id)
+ 	}
+ 	data->config = config;
+ 
+-	hwmon_dev = devm_hwmon_device_register_with_info(dev, client->name,
++	hwmon_dev = devm_hwmon_device_register_with_info(dev, "jc42",
+ 							 data, &jc42_chip_info,
+ 							 NULL);
+ 	return PTR_ERR_OR_ZERO(hwmon_dev);
+diff --git a/drivers/i2c/busses/i2c-altera.c b/drivers/i2c/busses/i2c-altera.c
+index 1de23b4f3809..92d2c706c2a7 100644
+--- a/drivers/i2c/busses/i2c-altera.c
++++ b/drivers/i2c/busses/i2c-altera.c
+@@ -384,7 +384,6 @@ static int altr_i2c_probe(struct platform_device *pdev)
+ 	struct altr_i2c_dev *idev = NULL;
+ 	struct resource *res;
+ 	int irq, ret;
+-	u32 val;
+ 
+ 	idev = devm_kzalloc(&pdev->dev, sizeof(*idev), GFP_KERNEL);
+ 	if (!idev)
+@@ -411,17 +410,17 @@ static int altr_i2c_probe(struct platform_device *pdev)
+ 	init_completion(&idev->msg_complete);
+ 	spin_lock_init(&idev->lock);
+ 
+-	val = device_property_read_u32(idev->dev, "fifo-size",
++	ret = device_property_read_u32(idev->dev, "fifo-size",
+ 				       &idev->fifo_size);
+-	if (val) {
++	if (ret) {
+ 		dev_err(&pdev->dev, "FIFO size set to default of %d\n",
+ 			ALTR_I2C_DFLT_FIFO_SZ);
+ 		idev->fifo_size = ALTR_I2C_DFLT_FIFO_SZ;
+ 	}
+ 
+-	val = device_property_read_u32(idev->dev, "clock-frequency",
++	ret = device_property_read_u32(idev->dev, "clock-frequency",
+ 				       &idev->bus_clk_rate);
+-	if (val) {
++	if (ret) {
+ 		dev_err(&pdev->dev, "Default to 100kHz\n");
+ 		idev->bus_clk_rate = 100000;	/* default clock rate */
+ 	}
+diff --git a/drivers/iio/adc/ad7793.c b/drivers/iio/adc/ad7793.c
+index b747db97f78a..e5691e330323 100644
+--- a/drivers/iio/adc/ad7793.c
++++ b/drivers/iio/adc/ad7793.c
+@@ -542,7 +542,7 @@ static const struct iio_info ad7797_info = {
+ 	.read_raw = &ad7793_read_raw,
+ 	.write_raw = &ad7793_write_raw,
+ 	.write_raw_get_fmt = &ad7793_write_raw_get_fmt,
+-	.attrs = &ad7793_attribute_group,
++	.attrs = &ad7797_attribute_group,
+ 	.validate_trigger = ad_sd_validate_trigger,
+ };
+ 
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+index 9c3486a8134f..84b27b624149 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx.h
+@@ -337,6 +337,7 @@ enum st_lsm6dsx_fifo_mode {
+  * @gain: Configured sensor sensitivity.
+  * @odr: Output data rate of the sensor [Hz].
+  * @watermark: Sensor watermark level.
++ * @decimator: Sensor decimation factor.
+  * @sip: Number of samples in a given pattern.
+  * @ts_ref: Sensor timestamp reference for hw one.
+  * @ext_info: Sensor settings if it is connected to i2c controller
+@@ -350,11 +351,13 @@ struct st_lsm6dsx_sensor {
+ 	u32 odr;
+ 
+ 	u16 watermark;
++	u8 decimator;
+ 	u8 sip;
+ 	s64 ts_ref;
+ 
+ 	struct {
+ 		const struct st_lsm6dsx_ext_dev_settings *settings;
++		u32 slv_odr;
+ 		u8 addr;
+ 	} ext_info;
+ };
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+index bb899345f2bb..afd00daeefb2 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_buffer.c
+@@ -93,6 +93,7 @@ st_lsm6dsx_get_decimator_val(struct st_lsm6dsx_sensor *sensor, u32 max_odr)
+ 			break;
+ 	}
+ 
++	sensor->decimator = decimator;
+ 	return i == max_size ? 0 : st_lsm6dsx_decimator_table[i].val;
+ }
+ 
+@@ -337,7 +338,7 @@ static inline int st_lsm6dsx_read_block(struct st_lsm6dsx_hw *hw, u8 addr,
+ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ {
+ 	struct st_lsm6dsx_sensor *acc_sensor, *gyro_sensor, *ext_sensor = NULL;
+-	int err, acc_sip, gyro_sip, ts_sip, ext_sip, read_len, offset;
++	int err, sip, acc_sip, gyro_sip, ts_sip, ext_sip, read_len, offset;
+ 	u16 fifo_len, pattern_len = hw->sip * ST_LSM6DSX_SAMPLE_SIZE;
+ 	u16 fifo_diff_mask = hw->settings->fifo_ops.fifo_diff.mask;
+ 	u8 gyro_buff[ST_LSM6DSX_IIO_BUFF_SIZE];
+@@ -399,19 +400,20 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ 		acc_sip = acc_sensor->sip;
+ 		ts_sip = hw->ts_sip;
+ 		offset = 0;
++		sip = 0;
+ 
+ 		while (acc_sip > 0 || gyro_sip > 0 || ext_sip > 0) {
+-			if (gyro_sip > 0) {
++			if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) {
+ 				memcpy(gyro_buff, &hw->buff[offset],
+ 				       ST_LSM6DSX_SAMPLE_SIZE);
+ 				offset += ST_LSM6DSX_SAMPLE_SIZE;
+ 			}
+-			if (acc_sip > 0) {
++			if (acc_sip > 0 && !(sip % acc_sensor->decimator)) {
+ 				memcpy(acc_buff, &hw->buff[offset],
+ 				       ST_LSM6DSX_SAMPLE_SIZE);
+ 				offset += ST_LSM6DSX_SAMPLE_SIZE;
+ 			}
+-			if (ext_sip > 0) {
++			if (ext_sip > 0 && !(sip % ext_sensor->decimator)) {
+ 				memcpy(ext_buff, &hw->buff[offset],
+ 				       ST_LSM6DSX_SAMPLE_SIZE);
+ 				offset += ST_LSM6DSX_SAMPLE_SIZE;
+@@ -441,18 +443,25 @@ int st_lsm6dsx_read_fifo(struct st_lsm6dsx_hw *hw)
+ 				offset += ST_LSM6DSX_SAMPLE_SIZE;
+ 			}
+ 
+-			if (gyro_sip-- > 0)
++			if (gyro_sip > 0 && !(sip % gyro_sensor->decimator)) {
+ 				iio_push_to_buffers_with_timestamp(
+ 					hw->iio_devs[ST_LSM6DSX_ID_GYRO],
+ 					gyro_buff, gyro_sensor->ts_ref + ts);
+-			if (acc_sip-- > 0)
++				gyro_sip--;
++			}
++			if (acc_sip > 0 && !(sip % acc_sensor->decimator)) {
+ 				iio_push_to_buffers_with_timestamp(
+ 					hw->iio_devs[ST_LSM6DSX_ID_ACC],
+ 					acc_buff, acc_sensor->ts_ref + ts);
+-			if (ext_sip-- > 0)
++				acc_sip--;
++			}
++			if (ext_sip > 0 && !(sip % ext_sensor->decimator)) {
+ 				iio_push_to_buffers_with_timestamp(
+ 					hw->iio_devs[ST_LSM6DSX_ID_EXT0],
+ 					ext_buff, ext_sensor->ts_ref + ts);
++				ext_sip--;
++			}
++			sip++;
+ 		}
+ 	}
+ 
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+index 95ddd19d1aa7..64ef07a30726 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+@@ -421,7 +421,8 @@ int st_lsm6dsx_shub_set_enable(struct st_lsm6dsx_sensor *sensor, bool enable)
+ 
+ 	settings = sensor->ext_info.settings;
+ 	if (enable) {
+-		err = st_lsm6dsx_shub_set_odr(sensor, sensor->odr);
++		err = st_lsm6dsx_shub_set_odr(sensor,
++					      sensor->ext_info.slv_odr);
+ 		if (err < 0)
+ 			return err;
+ 	} else {
+@@ -459,7 +460,7 @@ st_lsm6dsx_shub_read_oneshot(struct st_lsm6dsx_sensor *sensor,
+ 	if (err < 0)
+ 		return err;
+ 
+-	delay = 1000000000 / sensor->odr;
++	delay = 1000000000 / sensor->ext_info.slv_odr;
+ 	usleep_range(delay, 2 * delay);
+ 
+ 	len = min_t(int, sizeof(data), ch->scan_type.realbits >> 3);
+@@ -500,8 +501,8 @@ st_lsm6dsx_shub_read_raw(struct iio_dev *iio_dev,
+ 		iio_device_release_direct_mode(iio_dev);
+ 		break;
+ 	case IIO_CHAN_INFO_SAMP_FREQ:
+-		*val = sensor->odr / 1000;
+-		*val2 = (sensor->odr % 1000) * 1000;
++		*val = sensor->ext_info.slv_odr / 1000;
++		*val2 = (sensor->ext_info.slv_odr % 1000) * 1000;
+ 		ret = IIO_VAL_INT_PLUS_MICRO;
+ 		break;
+ 	case IIO_CHAN_INFO_SCALE:
+@@ -535,8 +536,20 @@ st_lsm6dsx_shub_write_raw(struct iio_dev *iio_dev,
+ 
+ 		val = val * 1000 + val2 / 1000;
+ 		err = st_lsm6dsx_shub_get_odr_val(sensor, val, &data);
+-		if (!err)
+-			sensor->odr = val;
++		if (!err) {
++			struct st_lsm6dsx_hw *hw = sensor->hw;
++			struct st_lsm6dsx_sensor *ref_sensor;
++			u8 odr_val;
++			int odr;
++
++			ref_sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]);
++			odr = st_lsm6dsx_check_odr(ref_sensor, val, &odr_val);
++			if (odr < 0)
++				return odr;
++
++			sensor->ext_info.slv_odr = val;
++			sensor->odr = odr;
++		}
+ 		break;
+ 	}
+ 	default:
+@@ -613,6 +626,7 @@ st_lsm6dsx_shub_alloc_iiodev(struct st_lsm6dsx_hw *hw,
+ 			     const struct st_lsm6dsx_ext_dev_settings *info,
+ 			     u8 i2c_addr, const char *name)
+ {
++	enum st_lsm6dsx_sensor_id ref_id = ST_LSM6DSX_ID_ACC;
+ 	struct iio_chan_spec *ext_channels;
+ 	struct st_lsm6dsx_sensor *sensor;
+ 	struct iio_dev *iio_dev;
+@@ -628,7 +642,8 @@ st_lsm6dsx_shub_alloc_iiodev(struct st_lsm6dsx_hw *hw,
+ 	sensor = iio_priv(iio_dev);
+ 	sensor->id = id;
+ 	sensor->hw = hw;
+-	sensor->odr = info->odr_table.odr_avl[0].milli_hz;
++	sensor->odr = hw->settings->odr_table[ref_id].odr_avl[0].milli_hz;
++	sensor->ext_info.slv_odr = info->odr_table.odr_avl[0].milli_hz;
+ 	sensor->gain = info->fs_table.fs_avl[0].gain;
+ 	sensor->ext_info.settings = info;
+ 	sensor->ext_info.addr = i2c_addr;
+diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
+index 7c8f65c9c32d..381513e05302 100644
+--- a/drivers/irqchip/irq-gic-v3-its.c
++++ b/drivers/irqchip/irq-gic-v3-its.c
+@@ -14,6 +14,7 @@
+ #include <linux/dma-iommu.h>
+ #include <linux/efi.h>
+ #include <linux/interrupt.h>
++#include <linux/iopoll.h>
+ #include <linux/irqdomain.h>
+ #include <linux/list.h>
+ #include <linux/log2.h>
+@@ -3516,6 +3517,20 @@ out:
+ 	return IRQ_SET_MASK_OK_DONE;
+ }
+ 
++static void its_wait_vpt_parse_complete(void)
++{
++	void __iomem *vlpi_base = gic_data_rdist_vlpi_base();
++	u64 val;
++
++	if (!gic_rdists->has_vpend_valid_dirty)
++		return;
++
++	WARN_ON_ONCE(readq_relaxed_poll_timeout(vlpi_base + GICR_VPENDBASER,
++						val,
++						!(val & GICR_VPENDBASER_Dirty),
++						10, 500));
++}
++
+ static void its_vpe_schedule(struct its_vpe *vpe)
+ {
+ 	void __iomem *vlpi_base = gic_data_rdist_vlpi_base();
+@@ -3546,6 +3561,8 @@ static void its_vpe_schedule(struct its_vpe *vpe)
+ 	val |= vpe->idai ? GICR_VPENDBASER_IDAI : 0;
+ 	val |= GICR_VPENDBASER_Valid;
+ 	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
++
++	its_wait_vpt_parse_complete();
+ }
+ 
+ static void its_vpe_deschedule(struct its_vpe *vpe)
+@@ -3752,6 +3769,8 @@ static void its_vpe_4_1_schedule(struct its_vpe *vpe,
+ 	val |= FIELD_PREP(GICR_VPENDBASER_4_1_VPEID, vpe->vpe_id);
+ 
+ 	gicr_write_vpendbaser(val, vlpi_base + GICR_VPENDBASER);
++
++	its_wait_vpt_parse_complete();
+ }
+ 
+ static void its_vpe_4_1_deschedule(struct its_vpe *vpe,
+diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
+index 1eec9d4649d5..71a84f9c5696 100644
+--- a/drivers/irqchip/irq-gic-v3.c
++++ b/drivers/irqchip/irq-gic-v3.c
+@@ -866,6 +866,7 @@ static int __gic_update_rdist_properties(struct redist_region *region,
+ 	gic_data.rdists.has_rvpeid &= !!(typer & GICR_TYPER_RVPEID);
+ 	gic_data.rdists.has_direct_lpi &= (!!(typer & GICR_TYPER_DirectLPIS) |
+ 					   gic_data.rdists.has_rvpeid);
++	gic_data.rdists.has_vpend_valid_dirty &= !!(typer & GICR_TYPER_DIRTY);
+ 
+ 	/* Detect non-sensical configurations */
+ 	if (WARN_ON_ONCE(gic_data.rdists.has_rvpeid && !gic_data.rdists.has_vlpis)) {
+@@ -886,10 +887,11 @@ static void gic_update_rdist_properties(void)
+ 	if (WARN_ON(gic_data.ppi_nr == UINT_MAX))
+ 		gic_data.ppi_nr = 0;
+ 	pr_info("%d PPIs implemented\n", gic_data.ppi_nr);
+-	pr_info("%sVLPI support, %sdirect LPI support, %sRVPEID support\n",
+-		!gic_data.rdists.has_vlpis ? "no " : "",
+-		!gic_data.rdists.has_direct_lpi ? "no " : "",
+-		!gic_data.rdists.has_rvpeid ? "no " : "");
++	if (gic_data.rdists.has_vlpis)
++		pr_info("GICv4 features: %s%s%s\n",
++			gic_data.rdists.has_direct_lpi ? "DirectLPI " : "",
++			gic_data.rdists.has_rvpeid ? "RVPEID " : "",
++			gic_data.rdists.has_vpend_valid_dirty ? "Valid+Dirty " : "");
+ }
+ 
+ /* Check whether it's single security state view */
+@@ -1614,6 +1616,7 @@ static int __init gic_init_bases(void __iomem *dist_base,
+ 	gic_data.rdists.has_rvpeid = true;
+ 	gic_data.rdists.has_vlpis = true;
+ 	gic_data.rdists.has_direct_lpi = true;
++	gic_data.rdists.has_vpend_valid_dirty = true;
+ 
+ 	if (WARN_ON(!gic_data.domain) || WARN_ON(!gic_data.rdists.rdist)) {
+ 		err = -ENOMEM;
+diff --git a/drivers/irqchip/irq-meson-gpio.c b/drivers/irqchip/irq-meson-gpio.c
+index ccc7f823911b..bc7aebcc96e9 100644
+--- a/drivers/irqchip/irq-meson-gpio.c
++++ b/drivers/irqchip/irq-meson-gpio.c
+@@ -144,12 +144,17 @@ struct meson_gpio_irq_controller {
+ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl,
+ 				       unsigned int reg, u32 mask, u32 val)
+ {
++	unsigned long flags;
+ 	u32 tmp;
+ 
++	spin_lock_irqsave(&ctl->lock, flags);
++
+ 	tmp = readl_relaxed(ctl->base + reg);
+ 	tmp &= ~mask;
+ 	tmp |= val;
+ 	writel_relaxed(tmp, ctl->base + reg);
++
++	spin_unlock_irqrestore(&ctl->lock, flags);
+ }
+ 
+ static void meson_gpio_irq_init_dummy(struct meson_gpio_irq_controller *ctl)
+@@ -196,14 +201,15 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 			       unsigned long  hwirq,
+ 			       u32 **channel_hwirq)
+ {
++	unsigned long flags;
+ 	unsigned int idx;
+ 
+-	spin_lock(&ctl->lock);
++	spin_lock_irqsave(&ctl->lock, flags);
+ 
+ 	/* Find a free channel */
+ 	idx = find_first_zero_bit(ctl->channel_map, NUM_CHANNEL);
+ 	if (idx >= NUM_CHANNEL) {
+-		spin_unlock(&ctl->lock);
++		spin_unlock_irqrestore(&ctl->lock, flags);
+ 		pr_err("No channel available\n");
+ 		return -ENOSPC;
+ 	}
+@@ -211,6 +217,8 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 	/* Mark the channel as used */
+ 	set_bit(idx, ctl->channel_map);
+ 
++	spin_unlock_irqrestore(&ctl->lock, flags);
++
+ 	/*
+ 	 * Setup the mux of the channel to route the signal of the pad
+ 	 * to the appropriate input of the GIC
+@@ -225,8 +233,6 @@ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+ 	 */
+ 	*channel_hwirq = &(ctl->channel_irqs[idx]);
+ 
+-	spin_unlock(&ctl->lock);
+-
+ 	pr_debug("hwirq %lu assigned to channel %d - irq %u\n",
+ 		 hwirq, idx, **channel_hwirq);
+ 
+@@ -287,13 +293,9 @@ static int meson_gpio_irq_type_setup(struct meson_gpio_irq_controller *ctl,
+ 			val |= REG_EDGE_POL_LOW(params, idx);
+ 	}
+ 
+-	spin_lock(&ctl->lock);
+-
+ 	meson_gpio_irq_update_bits(ctl, REG_EDGE_POL,
+ 				   REG_EDGE_POL_MASK(params, idx), val);
+ 
+-	spin_unlock(&ctl->lock);
+-
+ 	return 0;
+ }
+ 
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+index 844fdcf55118..2d4ed751333f 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+@@ -3748,7 +3748,7 @@ int t4_phy_fw_ver(struct adapter *adap, int *phy_fw_ver)
+ 		 FW_PARAMS_PARAM_Z_V(FW_PARAMS_PARAM_DEV_PHYFW_VERSION));
+ 	ret = t4_query_params(adap, adap->mbox, adap->pf, 0, 1,
+ 			      &param, &val);
+-	if (ret < 0)
++	if (ret)
+ 		return ret;
+ 	*phy_fw_ver = val;
+ 	return 0;
+diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
+index f79e57f735b3..d89568f810bc 100644
+--- a/drivers/net/ethernet/freescale/fec.h
++++ b/drivers/net/ethernet/freescale/fec.h
+@@ -488,6 +488,12 @@ struct fec_enet_priv_rx_q {
+ 	struct  sk_buff *rx_skbuff[RX_RING_SIZE];
+ };
+ 
++struct fec_stop_mode_gpr {
++	struct regmap *gpr;
++	u8 reg;
++	u8 bit;
++};
++
+ /* The FEC buffer descriptors track the ring buffers.  The rx_bd_base and
+  * tx_bd_base always point to the base of the buffer descriptors.  The
+  * cur_rx and cur_tx point to the currently available buffer.
+@@ -562,6 +568,7 @@ struct fec_enet_private {
+ 	int hwts_tx_en;
+ 	struct delayed_work time_keep;
+ 	struct regulator *reg_phy;
++	struct fec_stop_mode_gpr stop_gpr;
+ 
+ 	unsigned int tx_align;
+ 	unsigned int rx_align;
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index 23c5fef2f1ad..869efbb6c4d0 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -62,6 +62,8 @@
+ #include <linux/if_vlan.h>
+ #include <linux/pinctrl/consumer.h>
+ #include <linux/prefetch.h>
++#include <linux/mfd/syscon.h>
++#include <linux/regmap.h>
+ #include <soc/imx/cpuidle.h>
+ 
+ #include <asm/cacheflush.h>
+@@ -84,6 +86,56 @@ static void fec_enet_itr_coal_init(struct net_device *ndev);
+ #define FEC_ENET_OPD_V	0xFFF0
+ #define FEC_MDIO_PM_TIMEOUT  100 /* ms */
+ 
++struct fec_devinfo {
++	u32 quirks;
++	u8 stop_gpr_reg;
++	u8 stop_gpr_bit;
++};
++
++static const struct fec_devinfo fec_imx25_info = {
++	.quirks = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
++		  FEC_QUIRK_HAS_FRREG,
++};
++
++static const struct fec_devinfo fec_imx27_info = {
++	.quirks = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
++};
++
++static const struct fec_devinfo fec_imx28_info = {
++	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
++		  FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
++		  FEC_QUIRK_HAS_FRREG,
++};
++
++static const struct fec_devinfo fec_imx6q_info = {
++	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
++		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
++		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
++		  FEC_QUIRK_HAS_RACC,
++	.stop_gpr_reg = 0x34,
++	.stop_gpr_bit = 27,
++};
++
++static const struct fec_devinfo fec_mvf600_info = {
++	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
++};
++
++static const struct fec_devinfo fec_imx6x_info = {
++	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
++		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
++		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
++		  FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
++		  FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE,
++};
++
++static const struct fec_devinfo fec_imx6ul_info = {
++	.quirks = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
++		  FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
++		  FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
++		  FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
++		  FEC_QUIRK_HAS_COALESCE,
++};
++
+ static struct platform_device_id fec_devtype[] = {
+ 	{
+ 		/* keep it for coldfire */
+@@ -91,39 +143,25 @@ static struct platform_device_id fec_devtype[] = {
+ 		.driver_data = 0,
+ 	}, {
+ 		.name = "imx25-fec",
+-		.driver_data = FEC_QUIRK_USE_GASKET | FEC_QUIRK_MIB_CLEAR |
+-			       FEC_QUIRK_HAS_FRREG,
++		.driver_data = (kernel_ulong_t)&fec_imx25_info,
+ 	}, {
+ 		.name = "imx27-fec",
+-		.driver_data = FEC_QUIRK_MIB_CLEAR | FEC_QUIRK_HAS_FRREG,
++		.driver_data = (kernel_ulong_t)&fec_imx27_info,
+ 	}, {
+ 		.name = "imx28-fec",
+-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_SWAP_FRAME |
+-				FEC_QUIRK_SINGLE_MDIO | FEC_QUIRK_HAS_RACC |
+-				FEC_QUIRK_HAS_FRREG,
++		.driver_data = (kernel_ulong_t)&fec_imx28_info,
+ 	}, {
+ 		.name = "imx6q-fec",
+-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+-				FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+-				FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR006358 |
+-				FEC_QUIRK_HAS_RACC,
++		.driver_data = (kernel_ulong_t)&fec_imx6q_info,
+ 	}, {
+ 		.name = "mvf600-fec",
+-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_RACC,
++		.driver_data = (kernel_ulong_t)&fec_mvf600_info,
+ 	}, {
+ 		.name = "imx6sx-fec",
+-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+-				FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+-				FEC_QUIRK_HAS_VLAN | FEC_QUIRK_HAS_AVB |
+-				FEC_QUIRK_ERR007885 | FEC_QUIRK_BUG_CAPTURE |
+-				FEC_QUIRK_HAS_RACC | FEC_QUIRK_HAS_COALESCE,
++		.driver_data = (kernel_ulong_t)&fec_imx6x_info,
+ 	}, {
+ 		.name = "imx6ul-fec",
+-		.driver_data = FEC_QUIRK_ENET_MAC | FEC_QUIRK_HAS_GBIT |
+-				FEC_QUIRK_HAS_BUFDESC_EX | FEC_QUIRK_HAS_CSUM |
+-				FEC_QUIRK_HAS_VLAN | FEC_QUIRK_ERR007885 |
+-				FEC_QUIRK_BUG_CAPTURE | FEC_QUIRK_HAS_RACC |
+-				FEC_QUIRK_HAS_COALESCE,
++		.driver_data = (kernel_ulong_t)&fec_imx6ul_info,
+ 	}, {
+ 		/* sentinel */
+ 	}
+@@ -1092,11 +1130,28 @@ fec_restart(struct net_device *ndev)
+ 
+ }
+ 
++static void fec_enet_stop_mode(struct fec_enet_private *fep, bool enabled)
++{
++	struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
++	struct fec_stop_mode_gpr *stop_gpr = &fep->stop_gpr;
++
++	if (stop_gpr->gpr) {
++		if (enabled)
++			regmap_update_bits(stop_gpr->gpr, stop_gpr->reg,
++					   BIT(stop_gpr->bit),
++					   BIT(stop_gpr->bit));
++		else
++			regmap_update_bits(stop_gpr->gpr, stop_gpr->reg,
++					   BIT(stop_gpr->bit), 0);
++	} else if (pdata && pdata->sleep_mode_enable) {
++		pdata->sleep_mode_enable(enabled);
++	}
++}
++
+ static void
+ fec_stop(struct net_device *ndev)
+ {
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+-	struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
+ 	u32 rmii_mode = readl(fep->hwp + FEC_R_CNTRL) & (1 << 8);
+ 	u32 val;
+ 
+@@ -1125,9 +1180,7 @@ fec_stop(struct net_device *ndev)
+ 		val = readl(fep->hwp + FEC_ECNTRL);
+ 		val |= (FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
+ 		writel(val, fep->hwp + FEC_ECNTRL);
+-
+-		if (pdata && pdata->sleep_mode_enable)
+-			pdata->sleep_mode_enable(true);
++		fec_enet_stop_mode(fep, true);
+ 	}
+ 	writel(fep->phy_speed, fep->hwp + FEC_MII_SPEED);
+ 
+@@ -3397,6 +3450,37 @@ static int fec_enet_get_irq_cnt(struct platform_device *pdev)
+ 	return irq_cnt;
+ }
+ 
++static int fec_enet_init_stop_mode(struct fec_enet_private *fep,
++				   struct fec_devinfo *dev_info,
++				   struct device_node *np)
++{
++	struct device_node *gpr_np;
++	int ret = 0;
++
++	if (!dev_info)
++		return 0;
++
++	gpr_np = of_parse_phandle(np, "gpr", 0);
++	if (!gpr_np)
++		return 0;
++
++	fep->stop_gpr.gpr = syscon_node_to_regmap(gpr_np);
++	if (IS_ERR(fep->stop_gpr.gpr)) {
++		dev_err(&fep->pdev->dev, "could not find gpr regmap\n");
++		ret = PTR_ERR(fep->stop_gpr.gpr);
++		fep->stop_gpr.gpr = NULL;
++		goto out;
++	}
++
++	fep->stop_gpr.reg = dev_info->stop_gpr_reg;
++	fep->stop_gpr.bit = dev_info->stop_gpr_bit;
++
++out:
++	of_node_put(gpr_np);
++
++	return ret;
++}
++
+ static int
+ fec_probe(struct platform_device *pdev)
+ {
+@@ -3412,6 +3496,7 @@ fec_probe(struct platform_device *pdev)
+ 	int num_rx_qs;
+ 	char irq_name[8];
+ 	int irq_cnt;
++	struct fec_devinfo *dev_info;
+ 
+ 	fec_enet_get_queue_num(pdev, &num_tx_qs, &num_rx_qs);
+ 
+@@ -3429,7 +3514,9 @@ fec_probe(struct platform_device *pdev)
+ 	of_id = of_match_device(fec_dt_ids, &pdev->dev);
+ 	if (of_id)
+ 		pdev->id_entry = of_id->data;
+-	fep->quirks = pdev->id_entry->driver_data;
++	dev_info = (struct fec_devinfo *)pdev->id_entry->driver_data;
++	if (dev_info)
++		fep->quirks = dev_info->quirks;
+ 
+ 	fep->netdev = ndev;
+ 	fep->num_rx_queues = num_rx_qs;
+@@ -3463,6 +3550,10 @@ fec_probe(struct platform_device *pdev)
+ 	if (of_get_property(np, "fsl,magic-packet", NULL))
+ 		fep->wol_flag |= FEC_WOL_HAS_MAGIC_PACKET;
+ 
++	ret = fec_enet_init_stop_mode(fep, dev_info, np);
++	if (ret)
++		goto failed_stop_mode;
++
+ 	phy_node = of_parse_phandle(np, "phy-handle", 0);
+ 	if (!phy_node && of_phy_is_fixed_link(np)) {
+ 		ret = of_phy_register_fixed_link(np);
+@@ -3631,6 +3722,7 @@ failed_clk:
+ 	if (of_phy_is_fixed_link(np))
+ 		of_phy_deregister_fixed_link(np);
+ 	of_node_put(phy_node);
++failed_stop_mode:
+ failed_phy:
+ 	dev_id--;
+ failed_ioremap:
+@@ -3708,7 +3800,6 @@ static int __maybe_unused fec_resume(struct device *dev)
+ {
+ 	struct net_device *ndev = dev_get_drvdata(dev);
+ 	struct fec_enet_private *fep = netdev_priv(ndev);
+-	struct fec_platform_data *pdata = fep->pdev->dev.platform_data;
+ 	int ret;
+ 	int val;
+ 
+@@ -3726,8 +3817,8 @@ static int __maybe_unused fec_resume(struct device *dev)
+ 			goto failed_clk;
+ 		}
+ 		if (fep->wol_flag & FEC_WOL_FLAG_ENABLE) {
+-			if (pdata && pdata->sleep_mode_enable)
+-				pdata->sleep_mode_enable(false);
++			fec_enet_stop_mode(fep, false);
++
+ 			val = readl(fep->hwp + FEC_ECNTRL);
+ 			val &= ~(FEC_ECR_MAGICEN | FEC_ECR_SLEEP);
+ 			writel(val, fep->hwp + FEC_ECNTRL);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+index 94d7b69a95c7..eb2e57ff08a6 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/diag/fw_tracer.c
+@@ -935,7 +935,7 @@ struct mlx5_fw_tracer *mlx5_fw_tracer_create(struct mlx5_core_dev *dev)
+ 		return NULL;
+ 	}
+ 
+-	tracer = kzalloc(sizeof(*tracer), GFP_KERNEL);
++	tracer = kvzalloc(sizeof(*tracer), GFP_KERNEL);
+ 	if (!tracer)
+ 		return ERR_PTR(-ENOMEM);
+ 
+@@ -982,7 +982,7 @@ destroy_workqueue:
+ 	tracer->dev = NULL;
+ 	destroy_workqueue(tracer->work_queue);
+ free_tracer:
+-	kfree(tracer);
++	kvfree(tracer);
+ 	return ERR_PTR(err);
+ }
+ 
+@@ -1061,7 +1061,7 @@ void mlx5_fw_tracer_destroy(struct mlx5_fw_tracer *tracer)
+ 	mlx5_fw_tracer_destroy_log_buf(tracer);
+ 	flush_workqueue(tracer->work_queue);
+ 	destroy_workqueue(tracer->work_queue);
+-	kfree(tracer);
++	kvfree(tracer);
+ }
+ 
+ static int fw_tracer_event(struct notifier_block *nb, unsigned long action, void *data)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index ddd2409fc8be..5a5e6a21c6e1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -367,6 +367,7 @@ enum {
+ 	MLX5E_SQ_STATE_AM,
+ 	MLX5E_SQ_STATE_TLS,
+ 	MLX5E_SQ_STATE_VLAN_NEED_L2_INLINE,
++	MLX5E_SQ_STATE_PENDING_XSK_TX,
+ };
+ 
+ struct mlx5e_sq_wqe_info {
+@@ -950,7 +951,7 @@ void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
+ void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
+ void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
+ bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
+-void mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
++int mlx5e_poll_ico_cq(struct mlx5e_cq *cq);
+ bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq);
+ void mlx5e_dealloc_rx_wqe(struct mlx5e_rq *rq, u16 ix);
+ void mlx5e_dealloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
+index fe2d596cb361..3bcdb5b2fc20 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/tx.c
+@@ -33,6 +33,9 @@ int mlx5e_xsk_wakeup(struct net_device *dev, u32 qid, u32 flags)
+ 		if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &c->xskicosq.state)))
+ 			return 0;
+ 
++		if (test_and_set_bit(MLX5E_SQ_STATE_PENDING_XSK_TX, &c->xskicosq.state))
++			return 0;
++
+ 		spin_lock(&c->xskicosq_lock);
+ 		mlx5e_trigger_irq(&c->xskicosq);
+ 		spin_unlock(&c->xskicosq_lock);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index 265073996432..d02db5aebac4 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -3568,7 +3568,12 @@ mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
+ 	struct mlx5e_vport_stats *vstats = &priv->stats.vport;
+ 	struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+ 
+-	if (!mlx5e_monitor_counter_supported(priv)) {
++	/* In switchdev mode, monitor counters doesn't monitor
++	 * rx/tx stats of 802_3. The update stats mechanism
++	 * should keep the 802_3 layout counters updated
++	 */
++	if (!mlx5e_monitor_counter_supported(priv) ||
++	    mlx5e_is_uplink_rep(priv)) {
+ 		/* update HW stats in background for next time */
+ 		mlx5e_queue_update_stats(priv);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+index 312d4692425b..a9a96a630e4d 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+@@ -587,7 +587,7 @@ bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
+ 	return !!err;
+ }
+ 
+-void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
++int mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
+ {
+ 	struct mlx5e_icosq *sq = container_of(cq, struct mlx5e_icosq, cq);
+ 	struct mlx5_cqe64 *cqe;
+@@ -595,11 +595,11 @@ void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
+ 	int i;
+ 
+ 	if (unlikely(!test_bit(MLX5E_SQ_STATE_ENABLED, &sq->state)))
+-		return;
++		return 0;
+ 
+ 	cqe = mlx5_cqwq_get_cqe(&cq->wq);
+ 	if (likely(!cqe))
+-		return;
++		return 0;
+ 
+ 	/* sq->cc must be updated only after mlx5_cqwq_update_db_record(),
+ 	 * otherwise a cq overrun may occur
+@@ -648,6 +648,8 @@ void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
+ 	sq->cc = sqcc;
+ 
+ 	mlx5_cqwq_update_db_record(&cq->wq);
++
++	return i;
+ }
+ 
+ bool mlx5e_post_rx_mpwqes(struct mlx5e_rq *rq)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+index 800d34ed8a96..76efa9579215 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+@@ -145,7 +145,11 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
+ 
+ 	busy |= rq->post_wqes(rq);
+ 	if (xsk_open) {
+-		mlx5e_poll_ico_cq(&c->xskicosq.cq);
++		if (mlx5e_poll_ico_cq(&c->xskicosq.cq))
++			/* Don't clear the flag if nothing was polled to prevent
++			 * queueing more WQEs and overflowing XSKICOSQ.
++			 */
++			clear_bit(MLX5E_SQ_STATE_PENDING_XSK_TX, &c->xskicosq.state);
+ 		busy |= mlx5e_poll_xdpsq_cq(&xsksq->cq);
+ 		busy_xsk |= mlx5e_napi_xsk_post(xsksq, xskrq);
+ 	}
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+index 03bdd2e26329..38a65b984e47 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
+@@ -4691,26 +4691,20 @@ static void qed_chain_free_single(struct qed_dev *cdev,
+ 
+ static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *p_chain)
+ {
+-	void **pp_virt_addr_tbl = p_chain->pbl.pp_virt_addr_tbl;
++	struct addr_tbl_entry *pp_addr_tbl = p_chain->pbl.pp_addr_tbl;
+ 	u32 page_cnt = p_chain->page_cnt, i, pbl_size;
+-	u8 *p_pbl_virt = p_chain->pbl_sp.p_virt_table;
+ 
+-	if (!pp_virt_addr_tbl)
++	if (!pp_addr_tbl)
+ 		return;
+ 
+-	if (!p_pbl_virt)
+-		goto out;
+-
+ 	for (i = 0; i < page_cnt; i++) {
+-		if (!pp_virt_addr_tbl[i])
++		if (!pp_addr_tbl[i].virt_addr || !pp_addr_tbl[i].dma_map)
+ 			break;
+ 
+ 		dma_free_coherent(&cdev->pdev->dev,
+ 				  QED_CHAIN_PAGE_SIZE,
+-				  pp_virt_addr_tbl[i],
+-				  *(dma_addr_t *)p_pbl_virt);
+-
+-		p_pbl_virt += QED_CHAIN_PBL_ENTRY_SIZE;
++				  pp_addr_tbl[i].virt_addr,
++				  pp_addr_tbl[i].dma_map);
+ 	}
+ 
+ 	pbl_size = page_cnt * QED_CHAIN_PBL_ENTRY_SIZE;
+@@ -4720,9 +4714,9 @@ static void qed_chain_free_pbl(struct qed_dev *cdev, struct qed_chain *p_chain)
+ 				  pbl_size,
+ 				  p_chain->pbl_sp.p_virt_table,
+ 				  p_chain->pbl_sp.p_phys_table);
+-out:
+-	vfree(p_chain->pbl.pp_virt_addr_tbl);
+-	p_chain->pbl.pp_virt_addr_tbl = NULL;
++
++	vfree(p_chain->pbl.pp_addr_tbl);
++	p_chain->pbl.pp_addr_tbl = NULL;
+ }
+ 
+ void qed_chain_free(struct qed_dev *cdev, struct qed_chain *p_chain)
+@@ -4823,19 +4817,19 @@ qed_chain_alloc_pbl(struct qed_dev *cdev,
+ {
+ 	u32 page_cnt = p_chain->page_cnt, size, i;
+ 	dma_addr_t p_phys = 0, p_pbl_phys = 0;
+-	void **pp_virt_addr_tbl = NULL;
++	struct addr_tbl_entry *pp_addr_tbl;
+ 	u8 *p_pbl_virt = NULL;
+ 	void *p_virt = NULL;
+ 
+-	size = page_cnt * sizeof(*pp_virt_addr_tbl);
+-	pp_virt_addr_tbl = vzalloc(size);
+-	if (!pp_virt_addr_tbl)
++	size = page_cnt * sizeof(*pp_addr_tbl);
++	pp_addr_tbl =  vzalloc(size);
++	if (!pp_addr_tbl)
+ 		return -ENOMEM;
+ 
+ 	/* The allocation of the PBL table is done with its full size, since it
+ 	 * is expected to be successive.
+ 	 * qed_chain_init_pbl_mem() is called even in a case of an allocation
+-	 * failure, since pp_virt_addr_tbl was previously allocated, and it
++	 * failure, since tbl was previously allocated, and it
+ 	 * should be saved to allow its freeing during the error flow.
+ 	 */
+ 	size = page_cnt * QED_CHAIN_PBL_ENTRY_SIZE;
+@@ -4849,8 +4843,7 @@ qed_chain_alloc_pbl(struct qed_dev *cdev,
+ 		p_chain->b_external_pbl = true;
+ 	}
+ 
+-	qed_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys,
+-			       pp_virt_addr_tbl);
++	qed_chain_init_pbl_mem(p_chain, p_pbl_virt, p_pbl_phys, pp_addr_tbl);
+ 	if (!p_pbl_virt)
+ 		return -ENOMEM;
+ 
+@@ -4869,7 +4862,8 @@ qed_chain_alloc_pbl(struct qed_dev *cdev,
+ 		/* Fill the PBL table with the physical address of the page */
+ 		*(dma_addr_t *)p_pbl_virt = p_phys;
+ 		/* Keep the virtual address of the page */
+-		p_chain->pbl.pp_virt_addr_tbl[i] = p_virt;
++		p_chain->pbl.pp_addr_tbl[i].virt_addr = p_virt;
++		p_chain->pbl.pp_addr_tbl[i].dma_map = p_phys;
+ 
+ 		p_pbl_virt += QED_CHAIN_PBL_ENTRY_SIZE;
+ 	}
+diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
+index 2c189c637cca..96356e897c80 100644
+--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
++++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
+@@ -1087,9 +1087,6 @@ static void qed_update_pf_params(struct qed_dev *cdev,
+ #define QED_PERIODIC_DB_REC_INTERVAL_MS		100
+ #define QED_PERIODIC_DB_REC_INTERVAL \
+ 	msecs_to_jiffies(QED_PERIODIC_DB_REC_INTERVAL_MS)
+-#define QED_PERIODIC_DB_REC_WAIT_COUNT		10
+-#define QED_PERIODIC_DB_REC_WAIT_INTERVAL \
+-	(QED_PERIODIC_DB_REC_INTERVAL_MS / QED_PERIODIC_DB_REC_WAIT_COUNT)
+ 
+ static int qed_slowpath_delayed_work(struct qed_hwfn *hwfn,
+ 				     enum qed_slowpath_wq_flag wq_flag,
+@@ -1123,7 +1120,7 @@ void qed_periodic_db_rec_start(struct qed_hwfn *p_hwfn)
+ 
+ static void qed_slowpath_wq_stop(struct qed_dev *cdev)
+ {
+-	int i, sleep_count = QED_PERIODIC_DB_REC_WAIT_COUNT;
++	int i;
+ 
+ 	if (IS_VF(cdev))
+ 		return;
+@@ -1135,13 +1132,7 @@ static void qed_slowpath_wq_stop(struct qed_dev *cdev)
+ 		/* Stop queuing new delayed works */
+ 		cdev->hwfns[i].slowpath_wq_active = false;
+ 
+-		/* Wait until the last periodic doorbell recovery is executed */
+-		while (test_bit(QED_SLOWPATH_PERIODIC_DB_REC,
+-				&cdev->hwfns[i].slowpath_task_flags) &&
+-		       sleep_count--)
+-			msleep(QED_PERIODIC_DB_REC_WAIT_INTERVAL);
+-
+-		flush_workqueue(cdev->hwfns[i].slowpath_wq);
++		cancel_delayed_work(&cdev->hwfns[i].slowpath_task);
+ 		destroy_workqueue(cdev->hwfns[i].slowpath_wq);
+ 	}
+ }
+diff --git a/drivers/net/ethernet/sfc/efx_common.c b/drivers/net/ethernet/sfc/efx_common.c
+index b0d76bc19673..1799ff9a45d9 100644
+--- a/drivers/net/ethernet/sfc/efx_common.c
++++ b/drivers/net/ethernet/sfc/efx_common.c
+@@ -200,11 +200,11 @@ void efx_link_status_changed(struct efx_nic *efx)
+ unsigned int efx_xdp_max_mtu(struct efx_nic *efx)
+ {
+ 	/* The maximum MTU that we can fit in a single page, allowing for
+-	 * framing, overhead and XDP headroom.
++	 * framing, overhead and XDP headroom + tailroom.
+ 	 */
+ 	int overhead = EFX_MAX_FRAME_LEN(0) + sizeof(struct efx_rx_page_state) +
+ 		       efx->rx_prefix_size + efx->type->rx_buffer_padding +
+-		       efx->rx_ip_align + XDP_PACKET_HEADROOM;
++		       efx->rx_ip_align + EFX_XDP_HEADROOM + EFX_XDP_TAILROOM;
+ 
+ 	return PAGE_SIZE - overhead;
+ }
+@@ -302,8 +302,9 @@ static void efx_start_datapath(struct efx_nic *efx)
+ 	efx->rx_dma_len = (efx->rx_prefix_size +
+ 			   EFX_MAX_FRAME_LEN(efx->net_dev->mtu) +
+ 			   efx->type->rx_buffer_padding);
+-	rx_buf_len = (sizeof(struct efx_rx_page_state) + XDP_PACKET_HEADROOM +
+-		      efx->rx_ip_align + efx->rx_dma_len);
++	rx_buf_len = (sizeof(struct efx_rx_page_state)   + EFX_XDP_HEADROOM +
++		      efx->rx_ip_align + efx->rx_dma_len + EFX_XDP_TAILROOM);
++
+ 	if (rx_buf_len <= PAGE_SIZE) {
+ 		efx->rx_scatter = efx->type->always_rx_scatter;
+ 		efx->rx_buffer_order = 0;
+diff --git a/drivers/net/ethernet/sfc/net_driver.h b/drivers/net/ethernet/sfc/net_driver.h
+index 8164f0edcbf0..c8dcba482d89 100644
+--- a/drivers/net/ethernet/sfc/net_driver.h
++++ b/drivers/net/ethernet/sfc/net_driver.h
+@@ -91,6 +91,12 @@
+ #define EFX_RX_BUF_ALIGNMENT	4
+ #endif
+ 
++/* Non-standard XDP_PACKET_HEADROOM and tailroom to satisfy XDP_REDIRECT and
++ * still fit two standard MTU size packets into a single 4K page.
++ */
++#define EFX_XDP_HEADROOM	128
++#define EFX_XDP_TAILROOM	SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
++
+ /* Forward declare Precision Time Protocol (PTP) support structure. */
+ struct efx_ptp_data;
+ struct hwtstamp_config;
+diff --git a/drivers/net/ethernet/sfc/rx.c b/drivers/net/ethernet/sfc/rx.c
+index a2042f16babc..260352d97d9d 100644
+--- a/drivers/net/ethernet/sfc/rx.c
++++ b/drivers/net/ethernet/sfc/rx.c
+@@ -302,7 +302,7 @@ static bool efx_do_xdp(struct efx_nic *efx, struct efx_channel *channel,
+ 	       efx->rx_prefix_size);
+ 
+ 	xdp.data = *ehp;
+-	xdp.data_hard_start = xdp.data - XDP_PACKET_HEADROOM;
++	xdp.data_hard_start = xdp.data - EFX_XDP_HEADROOM;
+ 
+ 	/* No support yet for XDP metadata */
+ 	xdp_set_data_meta_invalid(&xdp);
+diff --git a/drivers/net/ethernet/sfc/rx_common.c b/drivers/net/ethernet/sfc/rx_common.c
+index ee8beb87bdc1..e10c23833515 100644
+--- a/drivers/net/ethernet/sfc/rx_common.c
++++ b/drivers/net/ethernet/sfc/rx_common.c
+@@ -412,10 +412,10 @@ static int efx_init_rx_buffers(struct efx_rx_queue *rx_queue, bool atomic)
+ 			index = rx_queue->added_count & rx_queue->ptr_mask;
+ 			rx_buf = efx_rx_buffer(rx_queue, index);
+ 			rx_buf->dma_addr = dma_addr + efx->rx_ip_align +
+-					   XDP_PACKET_HEADROOM;
++					   EFX_XDP_HEADROOM;
+ 			rx_buf->page = page;
+ 			rx_buf->page_offset = page_offset + efx->rx_ip_align +
+-					      XDP_PACKET_HEADROOM;
++					      EFX_XDP_HEADROOM;
+ 			rx_buf->len = efx->rx_dma_len;
+ 			rx_buf->flags = 0;
+ 			++rx_queue->added_count;
+@@ -433,7 +433,7 @@ static int efx_init_rx_buffers(struct efx_rx_queue *rx_queue, bool atomic)
+ void efx_rx_config_page_split(struct efx_nic *efx)
+ {
+ 	efx->rx_page_buf_step = ALIGN(efx->rx_dma_len + efx->rx_ip_align +
+-				      XDP_PACKET_HEADROOM,
++				      EFX_XDP_HEADROOM + EFX_XDP_TAILROOM,
+ 				      EFX_RX_BUF_ALIGNMENT);
+ 	efx->rx_bufs_per_page = efx->rx_buffer_order ? 1 :
+ 		((PAGE_SIZE - sizeof(struct efx_rx_page_state)) /
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index e0212d2fc2a1..fa32cd5b418e 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -241,6 +241,8 @@ static int socfpga_set_phy_mode_common(int phymode, u32 *val)
+ 	switch (phymode) {
+ 	case PHY_INTERFACE_MODE_RGMII:
+ 	case PHY_INTERFACE_MODE_RGMII_ID:
++	case PHY_INTERFACE_MODE_RGMII_RXID:
++	case PHY_INTERFACE_MODE_RGMII_TXID:
+ 		*val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_RGMII;
+ 		break;
+ 	case PHY_INTERFACE_MODE_MII:
+diff --git a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+index 23627c953a5e..436f501be937 100644
+--- a/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
++++ b/drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c
+@@ -729,9 +729,18 @@ static int brcmf_net_mon_stop(struct net_device *ndev)
+ 	return err;
+ }
+ 
++static netdev_tx_t brcmf_net_mon_start_xmit(struct sk_buff *skb,
++					    struct net_device *ndev)
++{
++	dev_kfree_skb_any(skb);
++
++	return NETDEV_TX_OK;
++}
++
+ static const struct net_device_ops brcmf_netdev_ops_mon = {
+ 	.ndo_open = brcmf_net_mon_open,
+ 	.ndo_stop = brcmf_net_mon_stop,
++	.ndo_start_xmit = brcmf_net_mon_start_xmit,
+ };
+ 
+ int brcmf_net_mon_attach(struct brcmf_if *ifp)
+diff --git a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+index 0481796f75bc..c24350222133 100644
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-drv.c
+@@ -1467,7 +1467,7 @@ static void iwl_req_fw_callback(const struct firmware *ucode_raw, void *context)
+ 				kmemdup(pieces->dbg_conf_tlv[i],
+ 					pieces->dbg_conf_tlv_len[i],
+ 					GFP_KERNEL);
+-			if (!pieces->dbg_conf_tlv[i])
++			if (!drv->fw.dbg.conf_tlv[i])
+ 				goto out_free_fw;
+ 		}
+ 	}
+diff --git a/drivers/net/wireless/realtek/rtw88/pci.c b/drivers/net/wireless/realtek/rtw88/pci.c
+index 1fbc14c149ec..fbaad23e8eb1 100644
+--- a/drivers/net/wireless/realtek/rtw88/pci.c
++++ b/drivers/net/wireless/realtek/rtw88/pci.c
+@@ -1287,22 +1287,17 @@ static void rtw_pci_phy_cfg(struct rtw_dev *rtwdev)
+ 	rtw_pci_link_cfg(rtwdev);
+ }
+ 
+-#ifdef CONFIG_PM
+-static int rtw_pci_suspend(struct device *dev)
++static int __maybe_unused rtw_pci_suspend(struct device *dev)
+ {
+ 	return 0;
+ }
+ 
+-static int rtw_pci_resume(struct device *dev)
++static int __maybe_unused rtw_pci_resume(struct device *dev)
+ {
+ 	return 0;
+ }
+ 
+ static SIMPLE_DEV_PM_OPS(rtw_pm_ops, rtw_pci_suspend, rtw_pci_resume);
+-#define RTW_PM_OPS (&rtw_pm_ops)
+-#else
+-#define RTW_PM_OPS NULL
+-#endif
+ 
+ static int rtw_pci_claim(struct rtw_dev *rtwdev, struct pci_dev *pdev)
+ {
+@@ -1530,7 +1525,7 @@ static struct pci_driver rtw_pci_driver = {
+ 	.id_table = rtw_pci_id_table,
+ 	.probe = rtw_pci_probe,
+ 	.remove = rtw_pci_remove,
+-	.driver.pm = RTW_PM_OPS,
++	.driver.pm = &rtw_pm_ops,
+ };
+ module_pci_driver(rtw_pci_driver);
+ 
+diff --git a/drivers/pci/quirks.c b/drivers/pci/quirks.c
+index b7347bc6a24d..ca9ed5774eb1 100644
+--- a/drivers/pci/quirks.c
++++ b/drivers/pci/quirks.c
+@@ -4465,6 +4465,29 @@ static int pci_quirk_xgene_acs(struct pci_dev *dev, u16 acs_flags)
+ 		PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
+ }
+ 
++/*
++ * Many Zhaoxin Root Ports and Switch Downstream Ports have no ACS capability.
++ * But the implementation could block peer-to-peer transactions between them
++ * and provide ACS-like functionality.
++ */
++static int  pci_quirk_zhaoxin_pcie_ports_acs(struct pci_dev *dev, u16 acs_flags)
++{
++	if (!pci_is_pcie(dev) ||
++	    ((pci_pcie_type(dev) != PCI_EXP_TYPE_ROOT_PORT) &&
++	     (pci_pcie_type(dev) != PCI_EXP_TYPE_DOWNSTREAM)))
++		return -ENOTTY;
++
++	switch (dev->device) {
++	case 0x0710 ... 0x071e:
++	case 0x0721:
++	case 0x0723 ... 0x0732:
++		return pci_acs_ctrl_enabled(acs_flags,
++			PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF);
++	}
++
++	return false;
++}
++
+ /*
+  * Many Intel PCH Root Ports do provide ACS-like features to disable peer
+  * transactions and validate bus numbers in requests, but do not provide an
+@@ -4767,6 +4790,12 @@ static const struct pci_dev_acs_enabled {
+ 	{ PCI_VENDOR_ID_BROADCOM, 0xD714, pci_quirk_brcm_acs },
+ 	/* Amazon Annapurna Labs */
+ 	{ PCI_VENDOR_ID_AMAZON_ANNAPURNA_LABS, 0x0031, pci_quirk_al_acs },
++	/* Zhaoxin multi-function devices */
++	{ PCI_VENDOR_ID_ZHAOXIN, 0x3038, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_ZHAOXIN, 0x3104, pci_quirk_mf_endpoint_acs },
++	{ PCI_VENDOR_ID_ZHAOXIN, 0x9083, pci_quirk_mf_endpoint_acs },
++	/* Zhaoxin Root/Downstream Ports */
++	{ PCI_VENDOR_ID_ZHAOXIN, PCI_ANY_ID, pci_quirk_zhaoxin_pcie_ports_acs },
+ 	{ 0 }
+ };
+ 
+@@ -5527,3 +5556,21 @@ out_disable:
+ DECLARE_PCI_FIXUP_CLASS_FINAL(PCI_VENDOR_ID_NVIDIA, 0x13b1,
+ 			      PCI_CLASS_DISPLAY_VGA, 8,
+ 			      quirk_reset_lenovo_thinkpad_p50_nvgpu);
++
++/*
++ * Device [1b21:2142]
++ * When in D0, PME# doesn't get asserted when plugging USB 3.0 device.
++ */
++static void pci_fixup_no_d0_pme(struct pci_dev *dev)
++{
++	pci_info(dev, "PME# does not work under D0, disabling it\n");
++	dev->pme_support &= ~(PCI_PM_CAP_PME_D0 >> PCI_PM_CAP_PME_SHIFT);
++}
++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ASMEDIA, 0x2142, pci_fixup_no_d0_pme);
++
++static void apex_pci_fixup_class(struct pci_dev *pdev)
++{
++	pdev->class = (PCI_CLASS_SYSTEM_OTHER << 8) | pdev->class;
++}
++DECLARE_PCI_FIXUP_CLASS_HEADER(0x1ac1, 0x089a,
++			       PCI_CLASS_NOT_DEFINED, 8, apex_pci_fixup_class);
+diff --git a/drivers/remoteproc/mtk_common.h b/drivers/remoteproc/mtk_common.h
+index deb20096146a..0066c83636d0 100644
+--- a/drivers/remoteproc/mtk_common.h
++++ b/drivers/remoteproc/mtk_common.h
+@@ -68,7 +68,7 @@ struct mtk_scp {
+ 	wait_queue_head_t ack_wq;
+ 
+ 	void __iomem *cpu_addr;
+-	phys_addr_t phys_addr;
++	dma_addr_t dma_addr;
+ 	size_t dram_size;
+ 
+ 	struct rproc_subdev *rpmsg_subdev;
+diff --git a/drivers/remoteproc/mtk_scp.c b/drivers/remoteproc/mtk_scp.c
+index 7ccdf64ff3ea..a6327617868e 100644
+--- a/drivers/remoteproc/mtk_scp.c
++++ b/drivers/remoteproc/mtk_scp.c
+@@ -330,7 +330,7 @@ static void *scp_da_to_va(struct rproc *rproc, u64 da, int len)
+ 		if (offset >= 0 && (offset + len) < scp->sram_size)
+ 			return (void __force *)scp->sram_base + offset;
+ 	} else {
+-		offset = da - scp->phys_addr;
++		offset = da - scp->dma_addr;
+ 		if (offset >= 0 && (offset + len) < scp->dram_size)
+ 			return (void __force *)scp->cpu_addr + offset;
+ 	}
+@@ -451,7 +451,7 @@ static int scp_map_memory_region(struct mtk_scp *scp)
+ 	/* Reserved SCP code size */
+ 	scp->dram_size = MAX_CODE_SIZE;
+ 	scp->cpu_addr = dma_alloc_coherent(scp->dev, scp->dram_size,
+-					   &scp->phys_addr, GFP_KERNEL);
++					   &scp->dma_addr, GFP_KERNEL);
+ 	if (!scp->cpu_addr)
+ 		return -ENOMEM;
+ 
+@@ -461,7 +461,7 @@ static int scp_map_memory_region(struct mtk_scp *scp)
+ static void scp_unmap_memory_region(struct mtk_scp *scp)
+ {
+ 	dma_free_coherent(scp->dev, scp->dram_size, scp->cpu_addr,
+-			  scp->phys_addr);
++			  scp->dma_addr);
+ 	of_reserved_mem_device_release(scp->dev);
+ }
+ 
+diff --git a/drivers/soc/xilinx/Kconfig b/drivers/soc/xilinx/Kconfig
+index 223f1f9d0922..646512d7276f 100644
+--- a/drivers/soc/xilinx/Kconfig
++++ b/drivers/soc/xilinx/Kconfig
+@@ -19,7 +19,7 @@ config XILINX_VCU
+ 
+ config ZYNQMP_POWER
+ 	bool "Enable Xilinx Zynq MPSoC Power Management driver"
+-	depends on PM && ARCH_ZYNQMP
++	depends on PM && ZYNQMP_FIRMWARE
+ 	default y
+ 	select MAILBOX
+ 	select ZYNQMP_IPI_MBOX
+@@ -35,7 +35,7 @@ config ZYNQMP_POWER
+ config ZYNQMP_PM_DOMAINS
+ 	bool "Enable Zynq MPSoC generic PM domains"
+ 	default y
+-	depends on PM && ARCH_ZYNQMP && ZYNQMP_FIRMWARE
++	depends on PM && ZYNQMP_FIRMWARE
+ 	select PM_GENERIC_DOMAINS
+ 	help
+ 	  Say yes to enable device power management through PM domains
+diff --git a/drivers/staging/gasket/apex_driver.c b/drivers/staging/gasket/apex_driver.c
+index 46199c8ca441..f12f81c8dd2f 100644
+--- a/drivers/staging/gasket/apex_driver.c
++++ b/drivers/staging/gasket/apex_driver.c
+@@ -570,13 +570,6 @@ static const struct pci_device_id apex_pci_ids[] = {
+ 	{ PCI_DEVICE(APEX_PCI_VENDOR_ID, APEX_PCI_DEVICE_ID) }, { 0 }
+ };
+ 
+-static void apex_pci_fixup_class(struct pci_dev *pdev)
+-{
+-	pdev->class = (PCI_CLASS_SYSTEM_OTHER << 8) | pdev->class;
+-}
+-DECLARE_PCI_FIXUP_CLASS_HEADER(APEX_PCI_VENDOR_ID, APEX_PCI_DEVICE_ID,
+-			       PCI_CLASS_NOT_DEFINED, 8, apex_pci_fixup_class);
+-
+ static int apex_pci_probe(struct pci_dev *pci_dev,
+ 			  const struct pci_device_id *id)
+ {
+diff --git a/drivers/target/target_core_fabric_lib.c b/drivers/target/target_core_fabric_lib.c
+index 6b4b354c88aa..b5c970faf585 100644
+--- a/drivers/target/target_core_fabric_lib.c
++++ b/drivers/target/target_core_fabric_lib.c
+@@ -63,7 +63,7 @@ static int fc_get_pr_transport_id(
+ 	 * encoded TransportID.
+ 	 */
+ 	ptr = &se_nacl->initiatorname[0];
+-	for (i = 0; i < 24; ) {
++	for (i = 0; i < 23; ) {
+ 		if (!strncmp(&ptr[i], ":", 1)) {
+ 			i++;
+ 			continue;
+diff --git a/drivers/target/target_core_user.c b/drivers/target/target_core_user.c
+index 0b9dfa6b17bc..f769bb1e3735 100644
+--- a/drivers/target/target_core_user.c
++++ b/drivers/target/target_core_user.c
+@@ -2073,6 +2073,7 @@ static void tcmu_reset_ring(struct tcmu_dev *udev, u8 err_level)
+ 	mb->cmd_tail = 0;
+ 	mb->cmd_head = 0;
+ 	tcmu_flush_dcache_range(mb, sizeof(*mb));
++	clear_bit(TCMU_DEV_BIT_BROKEN, &udev->flags);
+ 
+ 	del_timer(&udev->cmd_timer);
+ 
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index 9460d42f8675..c4be4631937a 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -1728,7 +1728,6 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
+ 	u32			reg;
+ 
+ 	u8			link_state;
+-	u8			speed;
+ 
+ 	/*
+ 	 * According to the Databook Remote wakeup request should
+@@ -1738,16 +1737,13 @@ static int __dwc3_gadget_wakeup(struct dwc3 *dwc)
+ 	 */
+ 	reg = dwc3_readl(dwc->regs, DWC3_DSTS);
+ 
+-	speed = reg & DWC3_DSTS_CONNECTSPD;
+-	if ((speed == DWC3_DSTS_SUPERSPEED) ||
+-	    (speed == DWC3_DSTS_SUPERSPEED_PLUS))
+-		return 0;
+-
+ 	link_state = DWC3_DSTS_USBLNKST(reg);
+ 
+ 	switch (link_state) {
++	case DWC3_LINK_STATE_RESET:
+ 	case DWC3_LINK_STATE_RX_DET:	/* in HS, means Early Suspend */
+ 	case DWC3_LINK_STATE_U3:	/* in HS, means SUSPEND */
++	case DWC3_LINK_STATE_RESUME:
+ 		break;
+ 	default:
+ 		return -EINVAL;
+diff --git a/drivers/usb/gadget/udc/atmel_usba_udc.c b/drivers/usb/gadget/udc/atmel_usba_udc.c
+index 6e0432141c40..22200341c8ec 100644
+--- a/drivers/usb/gadget/udc/atmel_usba_udc.c
++++ b/drivers/usb/gadget/udc/atmel_usba_udc.c
+@@ -1951,10 +1951,10 @@ static irqreturn_t usba_vbus_irq_thread(int irq, void *devid)
+ 			usba_start(udc);
+ 		} else {
+ 			udc->suspended = false;
+-			usba_stop(udc);
+-
+ 			if (udc->driver->disconnect)
+ 				udc->driver->disconnect(&udc->gadget);
++
++			usba_stop(udc);
+ 		}
+ 		udc->vbus_prev = vbus;
+ 	}
+diff --git a/drivers/usb/gadget/udc/bdc/bdc_ep.c b/drivers/usb/gadget/udc/bdc/bdc_ep.c
+index a4d9b5e1e50e..d49c6dc1082d 100644
+--- a/drivers/usb/gadget/udc/bdc/bdc_ep.c
++++ b/drivers/usb/gadget/udc/bdc/bdc_ep.c
+@@ -540,7 +540,7 @@ static void bdc_req_complete(struct bdc_ep *ep, struct bdc_req *req,
+ {
+ 	struct bdc *bdc = ep->bdc;
+ 
+-	if (req == NULL  || &req->queue == NULL || &req->usb_req == NULL)
++	if (req == NULL)
+ 		return;
+ 
+ 	dev_dbg(bdc->dev, "%s ep:%s status:%d\n", __func__, ep->name, status);
+diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
+index e17ca8156171..a38292ef79f6 100644
+--- a/drivers/xen/xenbus/xenbus_client.c
++++ b/drivers/xen/xenbus/xenbus_client.c
+@@ -448,7 +448,14 @@ EXPORT_SYMBOL_GPL(xenbus_free_evtchn);
+ int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
+ 			   unsigned int nr_grefs, void **vaddr)
+ {
+-	return ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);
++	int err;
++
++	err = ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);
++	/* Some hypervisors are buggy and can return 1. */
++	if (err > 0)
++		err = GNTST_general_error;
++
++	return err;
+ }
+ EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);
+ 
+diff --git a/fs/afs/cmservice.c b/fs/afs/cmservice.c
+index 6765949b3aab..380ad5ace7cf 100644
+--- a/fs/afs/cmservice.c
++++ b/fs/afs/cmservice.c
+@@ -169,7 +169,7 @@ static int afs_record_cm_probe(struct afs_call *call, struct afs_server *server)
+ 
+ 	spin_lock(&server->probe_lock);
+ 
+-	if (!test_bit(AFS_SERVER_FL_HAVE_EPOCH, &server->flags)) {
++	if (!test_and_set_bit(AFS_SERVER_FL_HAVE_EPOCH, &server->flags)) {
+ 		server->cm_epoch = call->epoch;
+ 		server->probe.cm_epoch = call->epoch;
+ 		goto out;
+diff --git a/fs/afs/internal.h b/fs/afs/internal.h
+index ef732dd4e7ef..15ae9c7f9c00 100644
+--- a/fs/afs/internal.h
++++ b/fs/afs/internal.h
+@@ -1335,7 +1335,7 @@ extern struct afs_volume *afs_create_volume(struct afs_fs_context *);
+ extern void afs_activate_volume(struct afs_volume *);
+ extern void afs_deactivate_volume(struct afs_volume *);
+ extern void afs_put_volume(struct afs_cell *, struct afs_volume *);
+-extern int afs_check_volume_status(struct afs_volume *, struct key *);
++extern int afs_check_volume_status(struct afs_volume *, struct afs_fs_cursor *);
+ 
+ /*
+  * write.c
+diff --git a/fs/afs/rotate.c b/fs/afs/rotate.c
+index 172ba569cd60..2a3305e42b14 100644
+--- a/fs/afs/rotate.c
++++ b/fs/afs/rotate.c
+@@ -192,7 +192,7 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ 			write_unlock(&vnode->volume->servers_lock);
+ 
+ 			set_bit(AFS_VOLUME_NEEDS_UPDATE, &vnode->volume->flags);
+-			error = afs_check_volume_status(vnode->volume, fc->key);
++			error = afs_check_volume_status(vnode->volume, fc);
+ 			if (error < 0)
+ 				goto failed_set_error;
+ 
+@@ -281,7 +281,7 @@ bool afs_select_fileserver(struct afs_fs_cursor *fc)
+ 
+ 			set_bit(AFS_VOLUME_WAIT, &vnode->volume->flags);
+ 			set_bit(AFS_VOLUME_NEEDS_UPDATE, &vnode->volume->flags);
+-			error = afs_check_volume_status(vnode->volume, fc->key);
++			error = afs_check_volume_status(vnode->volume, fc);
+ 			if (error < 0)
+ 				goto failed_set_error;
+ 
+@@ -341,7 +341,7 @@ start:
+ 	/* See if we need to do an update of the volume record.  Note that the
+ 	 * volume may have moved or even have been deleted.
+ 	 */
+-	error = afs_check_volume_status(vnode->volume, fc->key);
++	error = afs_check_volume_status(vnode->volume, fc);
+ 	if (error < 0)
+ 		goto failed_set_error;
+ 
+diff --git a/fs/afs/server.c b/fs/afs/server.c
+index b7f3cb2130ca..11b90ac7ea30 100644
+--- a/fs/afs/server.c
++++ b/fs/afs/server.c
+@@ -594,12 +594,9 @@ retry:
+ 	}
+ 
+ 	ret = wait_on_bit(&server->flags, AFS_SERVER_FL_UPDATING,
+-			  TASK_INTERRUPTIBLE);
++			  (fc->flags & AFS_FS_CURSOR_INTR) ?
++			  TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
+ 	if (ret == -ERESTARTSYS) {
+-		if (!(fc->flags & AFS_FS_CURSOR_INTR) && server->addresses) {
+-			_leave(" = t [intr]");
+-			return true;
+-		}
+ 		fc->error = ret;
+ 		_leave(" = f [intr]");
+ 		return false;
+diff --git a/fs/afs/volume.c b/fs/afs/volume.c
+index 92ca5e27573b..4310336b9bb8 100644
+--- a/fs/afs/volume.c
++++ b/fs/afs/volume.c
+@@ -281,7 +281,7 @@ error:
+ /*
+  * Make sure the volume record is up to date.
+  */
+-int afs_check_volume_status(struct afs_volume *volume, struct key *key)
++int afs_check_volume_status(struct afs_volume *volume, struct afs_fs_cursor *fc)
+ {
+ 	time64_t now = ktime_get_real_seconds();
+ 	int ret, retries = 0;
+@@ -299,7 +299,7 @@ retry:
+ 	}
+ 
+ 	if (!test_and_set_bit_lock(AFS_VOLUME_UPDATING, &volume->flags)) {
+-		ret = afs_update_volume_status(volume, key);
++		ret = afs_update_volume_status(volume, fc->key);
+ 		clear_bit_unlock(AFS_VOLUME_WAIT, &volume->flags);
+ 		clear_bit_unlock(AFS_VOLUME_UPDATING, &volume->flags);
+ 		wake_up_bit(&volume->flags, AFS_VOLUME_WAIT);
+@@ -312,7 +312,9 @@ retry:
+ 		return 0;
+ 	}
+ 
+-	ret = wait_on_bit(&volume->flags, AFS_VOLUME_WAIT, TASK_INTERRUPTIBLE);
++	ret = wait_on_bit(&volume->flags, AFS_VOLUME_WAIT,
++			  (fc->flags & AFS_FS_CURSOR_INTR) ?
++			  TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE);
+ 	if (ret == -ERESTARTSYS) {
+ 		_leave(" = %d", ret);
+ 		return ret;
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index 83b6d67325f6..b5b45c57e1b1 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -165,15 +165,15 @@ static void xdr_dump_bad(const __be32 *bp)
+ 	int i;
+ 
+ 	pr_notice("YFS XDR: Bad status record\n");
+-	for (i = 0; i < 5 * 4 * 4; i += 16) {
++	for (i = 0; i < 6 * 4 * 4; i += 16) {
+ 		memcpy(x, bp, 16);
+ 		bp += 4;
+ 		pr_notice("%03x: %08x %08x %08x %08x\n",
+ 			  i, ntohl(x[0]), ntohl(x[1]), ntohl(x[2]), ntohl(x[3]));
+ 	}
+ 
+-	memcpy(x, bp, 4);
+-	pr_notice("0x50: %08x\n", ntohl(x[0]));
++	memcpy(x, bp, 8);
++	pr_notice("0x60: %08x %08x\n", ntohl(x[0]), ntohl(x[1]));
+ }
+ 
+ /*
+diff --git a/fs/ext4/ialloc.c b/fs/ext4/ialloc.c
+index f95ee99091e4..eab18b7b56e7 100644
+--- a/fs/ext4/ialloc.c
++++ b/fs/ext4/ialloc.c
+@@ -663,7 +663,7 @@ static int find_group_other(struct super_block *sb, struct inode *parent,
+  * block has been written back to disk.  (Yes, these values are
+  * somewhat arbitrary...)
+  */
+-#define RECENTCY_MIN	5
++#define RECENTCY_MIN	60
+ #define RECENTCY_DIRTY	300
+ 
+ static int recently_deleted(struct super_block *sb, ext4_group_t group, int ino)
+diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
+index 37f65ad0d823..4d3c81fd0902 100644
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -1974,7 +1974,7 @@ static int ext4_writepage(struct page *page,
+ 	bool keep_towrite = false;
+ 
+ 	if (unlikely(ext4_forced_shutdown(EXT4_SB(inode->i_sb)))) {
+-		ext4_invalidatepage(page, 0, PAGE_SIZE);
++		inode->i_mapping->a_ops->invalidatepage(page, 0, PAGE_SIZE);
+ 		unlock_page(page);
+ 		return -EIO;
+ 	}
+diff --git a/fs/ext4/mballoc.c b/fs/ext4/mballoc.c
+index 51a78eb65f3c..2f7aebee1a7b 100644
+--- a/fs/ext4/mballoc.c
++++ b/fs/ext4/mballoc.c
+@@ -1936,7 +1936,8 @@ void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
+ 	int free;
+ 
+ 	free = e4b->bd_info->bb_free;
+-	BUG_ON(free <= 0);
++	if (WARN_ON(free <= 0))
++		return;
+ 
+ 	i = e4b->bd_info->bb_first_free;
+ 
+@@ -1959,7 +1960,8 @@ void ext4_mb_complex_scan_group(struct ext4_allocation_context *ac,
+ 		}
+ 
+ 		mb_find_extent(e4b, i, ac->ac_g_ex.fe_len, &ex);
+-		BUG_ON(ex.fe_len <= 0);
++		if (WARN_ON(ex.fe_len <= 0))
++			break;
+ 		if (free < ex.fe_len) {
+ 			ext4_grp_locked_error(sb, e4b->bd_group, 0, 0,
+ 					"%d free clusters as per "
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 16da3b3481a4..446158ab507d 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3610,7 +3610,8 @@ int ext4_calculate_overhead(struct super_block *sb)
+ 	 */
+ 	if (sbi->s_journal && !sbi->journal_bdev)
+ 		overhead += EXT4_NUM_B2C(sbi, sbi->s_journal->j_maxlen);
+-	else if (ext4_has_feature_journal(sb) && !sbi->s_journal) {
++	else if (ext4_has_feature_journal(sb) && !sbi->s_journal && j_inum) {
++		/* j_inum for internal journal is non-zero */
+ 		j_inode = ext4_get_journal_inode(sb, j_inum);
+ 		if (j_inode) {
+ 			j_blocks = j_inode->i_size >> sb->s_blocksize_bits;
+diff --git a/fs/nfsd/nfs4state.c b/fs/nfsd/nfs4state.c
+index 65cfe9ab47be..de9fbe7ed06c 100644
+--- a/fs/nfsd/nfs4state.c
++++ b/fs/nfsd/nfs4state.c
+@@ -267,6 +267,8 @@ find_or_allocate_block(struct nfs4_lockowner *lo, struct knfsd_fh *fh,
+ 	if (!nbl) {
+ 		nbl= kmalloc(sizeof(*nbl), GFP_KERNEL);
+ 		if (nbl) {
++			INIT_LIST_HEAD(&nbl->nbl_list);
++			INIT_LIST_HEAD(&nbl->nbl_lru);
+ 			fh_copy_shallow(&nbl->nbl_fh, fh);
+ 			locks_init_lock(&nbl->nbl_lock);
+ 			nfsd4_init_cb(&nbl->nbl_cb, lo->lo_owner.so_client,
+diff --git a/fs/pnode.c b/fs/pnode.c
+index 49f6d7ff2139..1106137c747a 100644
+--- a/fs/pnode.c
++++ b/fs/pnode.c
+@@ -261,14 +261,13 @@ static int propagate_one(struct mount *m)
+ 	child = copy_tree(last_source, last_source->mnt.mnt_root, type);
+ 	if (IS_ERR(child))
+ 		return PTR_ERR(child);
++	read_seqlock_excl(&mount_lock);
+ 	mnt_set_mountpoint(m, mp, child);
++	if (m->mnt_master != dest_master)
++		SET_MNT_MARK(m->mnt_master);
++	read_sequnlock_excl(&mount_lock);
+ 	last_dest = m;
+ 	last_source = child;
+-	if (m->mnt_master != dest_master) {
+-		read_seqlock_excl(&mount_lock);
+-		SET_MNT_MARK(m->mnt_master);
+-		read_sequnlock_excl(&mount_lock);
+-	}
+ 	hlist_add_head(&child->mnt_hash, list);
+ 	return count_mounts(m->mnt_ns, child);
+ }
+diff --git a/fs/ubifs/orphan.c b/fs/ubifs/orphan.c
+index edf43ddd7dce..7dd740e3692d 100644
+--- a/fs/ubifs/orphan.c
++++ b/fs/ubifs/orphan.c
+@@ -688,14 +688,14 @@ static int do_kill_orphans(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
+ 
+ 			ino_key_init(c, &key1, inum);
+ 			err = ubifs_tnc_lookup(c, &key1, ino);
+-			if (err)
++			if (err && err != -ENOENT)
+ 				goto out_free;
+ 
+ 			/*
+ 			 * Check whether an inode can really get deleted.
+ 			 * linkat() with O_TMPFILE allows rebirth of an inode.
+ 			 */
+-			if (ino->nlink == 0) {
++			if (err == 0 && ino->nlink == 0) {
+ 				dbg_rcvry("deleting orphaned inode %lu",
+ 					  (unsigned long)inum);
+ 
+diff --git a/fs/xfs/xfs_icache.c b/fs/xfs/xfs_icache.c
+index 8dc2e5414276..00932d2b503b 100644
+--- a/fs/xfs/xfs_icache.c
++++ b/fs/xfs/xfs_icache.c
+@@ -907,7 +907,12 @@ xfs_eofblocks_worker(
+ {
+ 	struct xfs_mount *mp = container_of(to_delayed_work(work),
+ 				struct xfs_mount, m_eofblocks_work);
++
++	if (!sb_start_write_trylock(mp->m_super))
++		return;
+ 	xfs_icache_free_eofblocks(mp, NULL);
++	sb_end_write(mp->m_super);
++
+ 	xfs_queue_eofblocks(mp);
+ }
+ 
+@@ -934,7 +939,12 @@ xfs_cowblocks_worker(
+ {
+ 	struct xfs_mount *mp = container_of(to_delayed_work(work),
+ 				struct xfs_mount, m_cowblocks_work);
++
++	if (!sb_start_write_trylock(mp->m_super))
++		return;
+ 	xfs_icache_free_cowblocks(mp, NULL);
++	sb_end_write(mp->m_super);
++
+ 	xfs_queue_cowblocks(mp);
+ }
+ 
+diff --git a/fs/xfs/xfs_ioctl.c b/fs/xfs/xfs_ioctl.c
+index d42de92cb283..4a99e0b0f333 100644
+--- a/fs/xfs/xfs_ioctl.c
++++ b/fs/xfs/xfs_ioctl.c
+@@ -2264,7 +2264,10 @@ xfs_file_ioctl(
+ 		if (error)
+ 			return error;
+ 
+-		return xfs_icache_free_eofblocks(mp, &keofb);
++		sb_start_write(mp->m_super);
++		error = xfs_icache_free_eofblocks(mp, &keofb);
++		sb_end_write(mp->m_super);
++		return error;
+ 	}
+ 
+ 	default:
+diff --git a/fs/xfs/xfs_reflink.c b/fs/xfs/xfs_reflink.c
+index b0ce04ffd3cd..107bf2a2f344 100644
+--- a/fs/xfs/xfs_reflink.c
++++ b/fs/xfs/xfs_reflink.c
+@@ -1051,6 +1051,7 @@ xfs_reflink_remap_extent(
+ 		uirec.br_startblock = irec->br_startblock + rlen;
+ 		uirec.br_startoff = irec->br_startoff + rlen;
+ 		uirec.br_blockcount = unmap_len - rlen;
++		uirec.br_state = irec->br_state;
+ 		unmap_len = rlen;
+ 
+ 		/* If this isn't a real mapping, we're done. */
+diff --git a/fs/xfs/xfs_trans_ail.c b/fs/xfs/xfs_trans_ail.c
+index 00cc5b8734be..3bc570c90ad9 100644
+--- a/fs/xfs/xfs_trans_ail.c
++++ b/fs/xfs/xfs_trans_ail.c
+@@ -529,8 +529,9 @@ xfsaild(
+ {
+ 	struct xfs_ail	*ailp = data;
+ 	long		tout = 0;	/* milliseconds */
++	unsigned int	noreclaim_flag;
+ 
+-	current->flags |= PF_MEMALLOC;
++	noreclaim_flag = memalloc_noreclaim_save();
+ 	set_freezable();
+ 
+ 	while (1) {
+@@ -601,6 +602,7 @@ xfsaild(
+ 		tout = xfsaild_push(ailp);
+ 	}
+ 
++	memalloc_noreclaim_restore(noreclaim_flag);
+ 	return 0;
+ }
+ 
+diff --git a/include/linux/irqchip/arm-gic-v3.h b/include/linux/irqchip/arm-gic-v3.h
+index 83439bfb6c5b..7613a84a2466 100644
+--- a/include/linux/irqchip/arm-gic-v3.h
++++ b/include/linux/irqchip/arm-gic-v3.h
+@@ -241,6 +241,7 @@
+ 
+ #define GICR_TYPER_PLPIS		(1U << 0)
+ #define GICR_TYPER_VLPIS		(1U << 1)
++#define GICR_TYPER_DIRTY		(1U << 2)
+ #define GICR_TYPER_DirectLPIS		(1U << 3)
+ #define GICR_TYPER_LAST			(1U << 4)
+ #define GICR_TYPER_RVPEID		(1U << 7)
+@@ -665,6 +666,7 @@ struct rdists {
+ 	bool			has_vlpis;
+ 	bool			has_rvpeid;
+ 	bool			has_direct_lpi;
++	bool			has_vpend_valid_dirty;
+ };
+ 
+ struct irq_domain;
+diff --git a/include/linux/pci_ids.h b/include/linux/pci_ids.h
+index 352c0d708720..6693cf561cd1 100644
+--- a/include/linux/pci_ids.h
++++ b/include/linux/pci_ids.h
+@@ -2583,6 +2583,8 @@
+ 
+ #define PCI_VENDOR_ID_AMAZON		0x1d0f
+ 
++#define PCI_VENDOR_ID_ZHAOXIN		0x1d17
++
+ #define PCI_VENDOR_ID_HYGON		0x1d94
+ 
+ #define PCI_VENDOR_ID_HXT		0x1dbf
+diff --git a/include/linux/printk.h b/include/linux/printk.h
+index 1e6108b8d15f..e061635e0409 100644
+--- a/include/linux/printk.h
++++ b/include/linux/printk.h
+@@ -202,7 +202,6 @@ __printf(1, 2) void dump_stack_set_arch_desc(const char *fmt, ...);
+ void dump_stack_print_info(const char *log_lvl);
+ void show_regs_print_info(const char *log_lvl);
+ extern asmlinkage void dump_stack(void) __cold;
+-extern void printk_safe_init(void);
+ extern void printk_safe_flush(void);
+ extern void printk_safe_flush_on_panic(void);
+ #else
+@@ -269,10 +268,6 @@ static inline void dump_stack(void)
+ {
+ }
+ 
+-static inline void printk_safe_init(void)
+-{
+-}
+-
+ static inline void printk_safe_flush(void)
+ {
+ }
+diff --git a/include/linux/qed/qed_chain.h b/include/linux/qed/qed_chain.h
+index 2dd0a9ed5b36..733fad7dfbed 100644
+--- a/include/linux/qed/qed_chain.h
++++ b/include/linux/qed/qed_chain.h
+@@ -97,6 +97,11 @@ struct qed_chain_u32 {
+ 	u32 cons_idx;
+ };
+ 
++struct addr_tbl_entry {
++	void *virt_addr;
++	dma_addr_t dma_map;
++};
++
+ struct qed_chain {
+ 	/* fastpath portion of the chain - required for commands such
+ 	 * as produce / consume.
+@@ -107,10 +112,11 @@ struct qed_chain {
+ 
+ 	/* Fastpath portions of the PBL [if exists] */
+ 	struct {
+-		/* Table for keeping the virtual addresses of the chain pages,
+-		 * respectively to the physical addresses in the pbl table.
++		/* Table for keeping the virtual and physical addresses of the
++		 * chain pages, respectively to the physical addresses
++		 * in the pbl table.
+ 		 */
+-		void **pp_virt_addr_tbl;
++		struct addr_tbl_entry *pp_addr_tbl;
+ 
+ 		union {
+ 			struct qed_chain_pbl_u16 u16;
+@@ -287,7 +293,7 @@ qed_chain_advance_page(struct qed_chain *p_chain,
+ 				*(u32 *)page_to_inc = 0;
+ 			page_index = *(u32 *)page_to_inc;
+ 		}
+-		*p_next_elem = p_chain->pbl.pp_virt_addr_tbl[page_index];
++		*p_next_elem = p_chain->pbl.pp_addr_tbl[page_index].virt_addr;
+ 	}
+ }
+ 
+@@ -537,7 +543,7 @@ static inline void qed_chain_init_params(struct qed_chain *p_chain,
+ 
+ 	p_chain->pbl_sp.p_phys_table = 0;
+ 	p_chain->pbl_sp.p_virt_table = NULL;
+-	p_chain->pbl.pp_virt_addr_tbl = NULL;
++	p_chain->pbl.pp_addr_tbl = NULL;
+ }
+ 
+ /**
+@@ -575,11 +581,11 @@ static inline void qed_chain_init_mem(struct qed_chain *p_chain,
+ static inline void qed_chain_init_pbl_mem(struct qed_chain *p_chain,
+ 					  void *p_virt_pbl,
+ 					  dma_addr_t p_phys_pbl,
+-					  void **pp_virt_addr_tbl)
++					  struct addr_tbl_entry *pp_addr_tbl)
+ {
+ 	p_chain->pbl_sp.p_phys_table = p_phys_pbl;
+ 	p_chain->pbl_sp.p_virt_table = p_virt_pbl;
+-	p_chain->pbl.pp_virt_addr_tbl = pp_virt_addr_tbl;
++	p_chain->pbl.pp_addr_tbl = pp_addr_tbl;
+ }
+ 
+ /**
+@@ -644,7 +650,7 @@ static inline void *qed_chain_get_last_elem(struct qed_chain *p_chain)
+ 		break;
+ 	case QED_CHAIN_MODE_PBL:
+ 		last_page_idx = p_chain->page_cnt - 1;
+-		p_virt_addr = p_chain->pbl.pp_virt_addr_tbl[last_page_idx];
++		p_virt_addr = p_chain->pbl.pp_addr_tbl[last_page_idx].virt_addr;
+ 		break;
+ 	}
+ 	/* p_virt_addr points at this stage to the last page of the chain */
+@@ -716,7 +722,7 @@ static inline void qed_chain_pbl_zero_mem(struct qed_chain *p_chain)
+ 	page_cnt = qed_chain_get_page_cnt(p_chain);
+ 
+ 	for (i = 0; i < page_cnt; i++)
+-		memset(p_chain->pbl.pp_virt_addr_tbl[i], 0,
++		memset(p_chain->pbl.pp_addr_tbl[i].virt_addr, 0,
+ 		       QED_CHAIN_PAGE_SIZE);
+ }
+ 
+diff --git a/include/linux/sunrpc/svc_rdma.h b/include/linux/sunrpc/svc_rdma.h
+index 40f65888dd38..fddad9f5b390 100644
+--- a/include/linux/sunrpc/svc_rdma.h
++++ b/include/linux/sunrpc/svc_rdma.h
+@@ -162,6 +162,7 @@ extern bool svc_rdma_post_recvs(struct svcxprt_rdma *rdma);
+ extern void svc_rdma_recv_ctxt_put(struct svcxprt_rdma *rdma,
+ 				   struct svc_rdma_recv_ctxt *ctxt);
+ extern void svc_rdma_flush_recv_queues(struct svcxprt_rdma *rdma);
++extern void svc_rdma_release_rqst(struct svc_rqst *rqstp);
+ extern int svc_rdma_recvfrom(struct svc_rqst *);
+ 
+ /* svc_rdma_rw.c */
+diff --git a/include/sound/soc.h b/include/sound/soc.h
+index 8a2266676b2d..efb8bad7b0fa 100644
+--- a/include/sound/soc.h
++++ b/include/sound/soc.h
+@@ -1058,6 +1058,7 @@ struct snd_soc_card {
+ 	const struct snd_soc_dapm_route *of_dapm_routes;
+ 	int num_of_dapm_routes;
+ 	bool fully_routed;
++	bool disable_route_checks;
+ 
+ 	/* lists of probed devices belonging to this card */
+ 	struct list_head component_dev_list;
+diff --git a/include/trace/events/iocost.h b/include/trace/events/iocost.h
+index 7ecaa65b7106..c2f580fd371b 100644
+--- a/include/trace/events/iocost.h
++++ b/include/trace/events/iocost.h
+@@ -130,7 +130,7 @@ DEFINE_EVENT(iocg_inuse_update, iocost_inuse_reset,
+ 
+ TRACE_EVENT(iocost_ioc_vrate_adj,
+ 
+-	TP_PROTO(struct ioc *ioc, u64 new_vrate, u32 (*missed_ppm)[2],
++	TP_PROTO(struct ioc *ioc, u64 new_vrate, u32 *missed_ppm,
+ 		u32 rq_wait_pct, int nr_lagging, int nr_shortages,
+ 		int nr_surpluses),
+ 
+@@ -155,8 +155,8 @@ TRACE_EVENT(iocost_ioc_vrate_adj,
+ 		__entry->old_vrate = atomic64_read(&ioc->vtime_rate);;
+ 		__entry->new_vrate = new_vrate;
+ 		__entry->busy_level = ioc->busy_level;
+-		__entry->read_missed_ppm = (*missed_ppm)[READ];
+-		__entry->write_missed_ppm = (*missed_ppm)[WRITE];
++		__entry->read_missed_ppm = missed_ppm[READ];
++		__entry->write_missed_ppm = missed_ppm[WRITE];
+ 		__entry->rq_wait_pct = rq_wait_pct;
+ 		__entry->nr_lagging = nr_lagging;
+ 		__entry->nr_shortages = nr_shortages;
+diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
+index c0e4c93324f5..fa14adf24235 100644
+--- a/include/trace/events/rpcrdma.h
++++ b/include/trace/events/rpcrdma.h
+@@ -1699,17 +1699,15 @@ DECLARE_EVENT_CLASS(svcrdma_sendcomp_event,
+ 
+ TRACE_EVENT(svcrdma_post_send,
+ 	TP_PROTO(
+-		const struct ib_send_wr *wr,
+-		int status
++		const struct ib_send_wr *wr
+ 	),
+ 
+-	TP_ARGS(wr, status),
++	TP_ARGS(wr),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(const void *, cqe)
+ 		__field(unsigned int, num_sge)
+ 		__field(u32, inv_rkey)
+-		__field(int, status)
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -1717,12 +1715,11 @@ TRACE_EVENT(svcrdma_post_send,
+ 		__entry->num_sge = wr->num_sge;
+ 		__entry->inv_rkey = (wr->opcode == IB_WR_SEND_WITH_INV) ?
+ 					wr->ex.invalidate_rkey : 0;
+-		__entry->status = status;
+ 	),
+ 
+-	TP_printk("cqe=%p num_sge=%u inv_rkey=0x%08x status=%d",
++	TP_printk("cqe=%p num_sge=%u inv_rkey=0x%08x",
+ 		__entry->cqe, __entry->num_sge,
+-		__entry->inv_rkey, __entry->status
++		__entry->inv_rkey
+ 	)
+ );
+ 
+@@ -1787,26 +1784,23 @@ TRACE_EVENT(svcrdma_wc_receive,
+ TRACE_EVENT(svcrdma_post_rw,
+ 	TP_PROTO(
+ 		const void *cqe,
+-		int sqecount,
+-		int status
++		int sqecount
+ 	),
+ 
+-	TP_ARGS(cqe, sqecount, status),
++	TP_ARGS(cqe, sqecount),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(const void *, cqe)
+ 		__field(int, sqecount)
+-		__field(int, status)
+ 	),
+ 
+ 	TP_fast_assign(
+ 		__entry->cqe = cqe;
+ 		__entry->sqecount = sqecount;
+-		__entry->status = status;
+ 	),
+ 
+-	TP_printk("cqe=%p sqecount=%d status=%d",
+-		__entry->cqe, __entry->sqecount, __entry->status
++	TP_printk("cqe=%p sqecount=%d",
++		__entry->cqe, __entry->sqecount
+ 	)
+ );
+ 
+@@ -1902,6 +1896,34 @@ DECLARE_EVENT_CLASS(svcrdma_sendqueue_event,
+ DEFINE_SQ_EVENT(full);
+ DEFINE_SQ_EVENT(retry);
+ 
++TRACE_EVENT(svcrdma_sq_post_err,
++	TP_PROTO(
++		const struct svcxprt_rdma *rdma,
++		int status
++	),
++
++	TP_ARGS(rdma, status),
++
++	TP_STRUCT__entry(
++		__field(int, avail)
++		__field(int, depth)
++		__field(int, status)
++		__string(addr, rdma->sc_xprt.xpt_remotebuf)
++	),
++
++	TP_fast_assign(
++		__entry->avail = atomic_read(&rdma->sc_sq_avail);
++		__entry->depth = rdma->sc_sq_depth;
++		__entry->status = status;
++		__assign_str(addr, rdma->sc_xprt.xpt_remotebuf);
++	),
++
++	TP_printk("addr=%s sc_sq_avail=%d/%d status=%d",
++		__get_str(addr), __entry->avail, __entry->depth,
++		__entry->status
++	)
++);
++
+ #endif /* _TRACE_RPCRDMA_H */
+ 
+ #include <trace/define_trace.h>
+diff --git a/include/uapi/linux/netfilter/nf_tables.h b/include/uapi/linux/netfilter/nf_tables.h
+index 065218a20bb7..bbd4b42b76c7 100644
+--- a/include/uapi/linux/netfilter/nf_tables.h
++++ b/include/uapi/linux/netfilter/nf_tables.h
+@@ -276,6 +276,7 @@ enum nft_rule_compat_attributes {
+  * @NFT_SET_TIMEOUT: set uses timeouts
+  * @NFT_SET_EVAL: set can be updated from the evaluation path
+  * @NFT_SET_OBJECT: set contains stateful objects
++ * @NFT_SET_CONCAT: set contains a concatenation
+  */
+ enum nft_set_flags {
+ 	NFT_SET_ANONYMOUS		= 0x1,
+@@ -285,6 +286,7 @@ enum nft_set_flags {
+ 	NFT_SET_TIMEOUT			= 0x10,
+ 	NFT_SET_EVAL			= 0x20,
+ 	NFT_SET_OBJECT			= 0x40,
++	NFT_SET_CONCAT			= 0x80,
+ };
+ 
+ /**
+diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
+index bbe791b24168..0e43f674a686 100644
+--- a/include/uapi/linux/pkt_sched.h
++++ b/include/uapi/linux/pkt_sched.h
+@@ -1197,8 +1197,8 @@ enum {
+  *       [TCA_TAPRIO_ATTR_SCHED_ENTRY_INTERVAL]
+  */
+ 
+-#define TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST	BIT(0)
+-#define TCA_TAPRIO_ATTR_FLAG_FULL_OFFLOAD	BIT(1)
++#define TCA_TAPRIO_ATTR_FLAG_TXTIME_ASSIST	_BITUL(0)
++#define TCA_TAPRIO_ATTR_FLAG_FULL_OFFLOAD	_BITUL(1)
+ 
+ enum {
+ 	TCA_TAPRIO_ATTR_UNSPEC,
+diff --git a/init/main.c b/init/main.c
+index ee4947af823f..9c7948b3763a 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -907,7 +907,6 @@ asmlinkage __visible void __init start_kernel(void)
+ 	boot_init_stack_canary();
+ 
+ 	time_init();
+-	printk_safe_init();
+ 	perf_event_init();
+ 	profile_init();
+ 	call_function_init();
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 70f71b154fa5..3fe0b006d2d2 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -469,7 +469,7 @@ static int cpu_map_update_elem(struct bpf_map *map, void *key, void *value,
+ 		return -EOVERFLOW;
+ 
+ 	/* Make sure CPU is a valid possible cpu */
+-	if (!cpu_possible(key_cpu))
++	if (key_cpu >= nr_cpumask_bits || !cpu_possible(key_cpu))
+ 		return -ENODEV;
+ 
+ 	if (qsize == 0) {
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index e5d12c54b552..1c53ccbd5b5d 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1918,6 +1918,15 @@ static bool register_is_const(struct bpf_reg_state *reg)
+ 	return reg->type == SCALAR_VALUE && tnum_is_const(reg->var_off);
+ }
+ 
++static bool __is_pointer_value(bool allow_ptr_leaks,
++			       const struct bpf_reg_state *reg)
++{
++	if (allow_ptr_leaks)
++		return false;
++
++	return reg->type != SCALAR_VALUE;
++}
++
+ static void save_register_state(struct bpf_func_state *state,
+ 				int spi, struct bpf_reg_state *reg)
+ {
+@@ -2108,6 +2117,16 @@ static int check_stack_read(struct bpf_verifier_env *env,
+ 			 * which resets stack/reg liveness for state transitions
+ 			 */
+ 			state->regs[value_regno].live |= REG_LIVE_WRITTEN;
++		} else if (__is_pointer_value(env->allow_ptr_leaks, reg)) {
++			/* If value_regno==-1, the caller is asking us whether
++			 * it is acceptable to use this value as a SCALAR_VALUE
++			 * (e.g. for XADD).
++			 * We must not allow unprivileged callers to do that
++			 * with spilled pointers.
++			 */
++			verbose(env, "leaking pointer from stack off %d\n",
++				off);
++			return -EACCES;
+ 		}
+ 		mark_reg_read(env, reg, reg->parent, REG_LIVE_READ64);
+ 	} else {
+@@ -2473,15 +2492,6 @@ static int check_sock_access(struct bpf_verifier_env *env, int insn_idx,
+ 	return -EACCES;
+ }
+ 
+-static bool __is_pointer_value(bool allow_ptr_leaks,
+-			       const struct bpf_reg_state *reg)
+-{
+-	if (allow_ptr_leaks)
+-		return false;
+-
+-	return reg->type != SCALAR_VALUE;
+-}
+-
+ static struct bpf_reg_state *reg_state(struct bpf_verifier_env *env, int regno)
+ {
+ 	return cur_regs(env) + regno;
+@@ -2875,7 +2885,7 @@ static int check_ptr_to_btf_access(struct bpf_verifier_env *env,
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	if (atype == BPF_READ) {
++	if (atype == BPF_READ && value_regno >= 0) {
+ 		if (ret == SCALAR_VALUE) {
+ 			mark_reg_unknown(env, regs, value_regno);
+ 			return 0;
+@@ -9882,6 +9892,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
+ 				return -EINVAL;
+ 			}
+ 			env->ops = bpf_verifier_ops[tgt_prog->type];
++			prog->expected_attach_type = tgt_prog->expected_attach_type;
+ 		}
+ 		if (!tgt_prog->jited) {
+ 			verbose(env, "Can attach to only JITed progs\n");
+@@ -10215,6 +10226,13 @@ err_release_maps:
+ 		 * them now. Otherwise free_used_maps() will release them.
+ 		 */
+ 		release_maps(env);
++
++	/* extension progs temporarily inherit the attach_type of their targets
++	   for verification purposes, so set it back to zero before returning
++	 */
++	if (env->prog->type == BPF_PROG_TYPE_EXT)
++		env->prog->expected_attach_type = 0;
++
+ 	*prog = env->prog;
+ err_unlock:
+ 	if (!is_priv)
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 533c19348189..29ace472f916 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -7278,10 +7278,17 @@ static void perf_event_task_output(struct perf_event *event,
+ 		goto out;
+ 
+ 	task_event->event_id.pid = perf_event_pid(event, task);
+-	task_event->event_id.ppid = perf_event_pid(event, current);
+-
+ 	task_event->event_id.tid = perf_event_tid(event, task);
+-	task_event->event_id.ptid = perf_event_tid(event, current);
++
++	if (task_event->event_id.header.type == PERF_RECORD_EXIT) {
++		task_event->event_id.ppid = perf_event_pid(event,
++							task->real_parent);
++		task_event->event_id.ptid = perf_event_pid(event,
++							task->real_parent);
++	} else {  /* PERF_RECORD_FORK */
++		task_event->event_id.ppid = perf_event_pid(event, current);
++		task_event->event_id.ptid = perf_event_tid(event, current);
++	}
+ 
+ 	task_event->event_id.time = perf_event_clock(event);
+ 
+diff --git a/kernel/printk/internal.h b/kernel/printk/internal.h
+index c8e6ab689d42..b2b0f526f249 100644
+--- a/kernel/printk/internal.h
++++ b/kernel/printk/internal.h
+@@ -23,6 +23,9 @@ __printf(1, 0) int vprintk_func(const char *fmt, va_list args);
+ void __printk_safe_enter(void);
+ void __printk_safe_exit(void);
+ 
++void printk_safe_init(void);
++bool printk_percpu_data_ready(void);
++
+ #define printk_safe_enter_irqsave(flags)	\
+ 	do {					\
+ 		local_irq_save(flags);		\
+@@ -64,4 +67,6 @@ __printf(1, 0) int vprintk_func(const char *fmt, va_list args) { return 0; }
+ #define printk_safe_enter_irq() local_irq_disable()
+ #define printk_safe_exit_irq() local_irq_enable()
+ 
++static inline void printk_safe_init(void) { }
++static inline bool printk_percpu_data_ready(void) { return false; }
+ #endif /* CONFIG_PRINTK */
+diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
+index fada22dc4ab6..74fbd76cf664 100644
+--- a/kernel/printk/printk.c
++++ b/kernel/printk/printk.c
+@@ -460,6 +460,18 @@ static char __log_buf[__LOG_BUF_LEN] __aligned(LOG_ALIGN);
+ static char *log_buf = __log_buf;
+ static u32 log_buf_len = __LOG_BUF_LEN;
+ 
++/*
++ * We cannot access per-CPU data (e.g. per-CPU flush irq_work) before
++ * per_cpu_areas are initialised. This variable is set to true when
++ * it's safe to access per-CPU data.
++ */
++static bool __printk_percpu_data_ready __read_mostly;
++
++bool printk_percpu_data_ready(void)
++{
++	return __printk_percpu_data_ready;
++}
++
+ /* Return log buffer address */
+ char *log_buf_addr_get(void)
+ {
+@@ -1146,12 +1158,28 @@ static void __init log_buf_add_cpu(void)
+ static inline void log_buf_add_cpu(void) {}
+ #endif /* CONFIG_SMP */
+ 
++static void __init set_percpu_data_ready(void)
++{
++	printk_safe_init();
++	/* Make sure we set this flag only after printk_safe() init is done */
++	barrier();
++	__printk_percpu_data_ready = true;
++}
++
+ void __init setup_log_buf(int early)
+ {
+ 	unsigned long flags;
+ 	char *new_log_buf;
+ 	unsigned int free;
+ 
++	/*
++	 * Some archs call setup_log_buf() multiple times - first is very
++	 * early, e.g. from setup_arch(), and second - when percpu_areas
++	 * are initialised.
++	 */
++	if (!early)
++		set_percpu_data_ready();
++
+ 	if (log_buf != __log_buf)
+ 		return;
+ 
+@@ -2966,6 +2994,9 @@ static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
+ 
+ void wake_up_klogd(void)
+ {
++	if (!printk_percpu_data_ready())
++		return;
++
+ 	preempt_disable();
+ 	if (waitqueue_active(&log_wait)) {
+ 		this_cpu_or(printk_pending, PRINTK_PENDING_WAKEUP);
+@@ -2976,6 +3007,9 @@ void wake_up_klogd(void)
+ 
+ void defer_console_output(void)
+ {
++	if (!printk_percpu_data_ready())
++		return;
++
+ 	preempt_disable();
+ 	__this_cpu_or(printk_pending, PRINTK_PENDING_OUTPUT);
+ 	irq_work_queue(this_cpu_ptr(&wake_up_klogd_work));
+diff --git a/kernel/printk/printk_safe.c b/kernel/printk/printk_safe.c
+index b4045e782743..d9a659a686f3 100644
+--- a/kernel/printk/printk_safe.c
++++ b/kernel/printk/printk_safe.c
+@@ -27,7 +27,6 @@
+  * There are situations when we want to make sure that all buffers
+  * were handled or when IRQs are blocked.
+  */
+-static int printk_safe_irq_ready __read_mostly;
+ 
+ #define SAFE_LOG_BUF_LEN ((1 << CONFIG_PRINTK_SAFE_LOG_BUF_SHIFT) -	\
+ 				sizeof(atomic_t) -			\
+@@ -51,7 +50,7 @@ static DEFINE_PER_CPU(struct printk_safe_seq_buf, nmi_print_seq);
+ /* Get flushed in a more safe context. */
+ static void queue_flush_work(struct printk_safe_seq_buf *s)
+ {
+-	if (printk_safe_irq_ready)
++	if (printk_percpu_data_ready())
+ 		irq_work_queue(&s->work);
+ }
+ 
+@@ -402,14 +401,6 @@ void __init printk_safe_init(void)
+ #endif
+ 	}
+ 
+-	/*
+-	 * In the highly unlikely event that a NMI were to trigger at
+-	 * this moment. Make sure IRQ work is set up before this
+-	 * variable is set.
+-	 */
+-	barrier();
+-	printk_safe_irq_ready = 1;
+-
+ 	/* Flush pending messages that did not have scheduled IRQ works. */
+ 	printk_safe_flush();
+ }
+diff --git a/kernel/sched/core.c b/kernel/sched/core.c
+index da8a19470218..3dd675697301 100644
+--- a/kernel/sched/core.c
++++ b/kernel/sched/core.c
+@@ -1239,13 +1239,8 @@ static void uclamp_fork(struct task_struct *p)
+ 		return;
+ 
+ 	for_each_clamp_id(clamp_id) {
+-		unsigned int clamp_value = uclamp_none(clamp_id);
+-
+-		/* By default, RT tasks always get 100% boost */
+-		if (unlikely(rt_task(p) && clamp_id == UCLAMP_MIN))
+-			clamp_value = uclamp_none(UCLAMP_MAX);
+-
+-		uclamp_se_set(&p->uclamp_req[clamp_id], clamp_value, false);
++		uclamp_se_set(&p->uclamp_req[clamp_id],
++			      uclamp_none(clamp_id), false);
+ 	}
+ }
+ 
+diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
+index dac9104d126f..ff9435dee1df 100644
+--- a/kernel/sched/cputime.c
++++ b/kernel/sched/cputime.c
+@@ -1003,12 +1003,12 @@ u64 kcpustat_field(struct kernel_cpustat *kcpustat,
+ 		   enum cpu_usage_stat usage, int cpu)
+ {
+ 	u64 *cpustat = kcpustat->cpustat;
++	u64 val = cpustat[usage];
+ 	struct rq *rq;
+-	u64 val;
+ 	int err;
+ 
+ 	if (!vtime_accounting_enabled_cpu(cpu))
+-		return cpustat[usage];
++		return val;
+ 
+ 	rq = cpu_rq(cpu);
+ 
+diff --git a/kernel/sched/isolation.c b/kernel/sched/isolation.c
+index 008d6ac2342b..808244f3ddd9 100644
+--- a/kernel/sched/isolation.c
++++ b/kernel/sched/isolation.c
+@@ -149,6 +149,9 @@ __setup("nohz_full=", housekeeping_nohz_full_setup);
+ static int __init housekeeping_isolcpus_setup(char *str)
+ {
+ 	unsigned int flags = 0;
++	bool illegal = false;
++	char *par;
++	int len;
+ 
+ 	while (isalpha(*str)) {
+ 		if (!strncmp(str, "nohz,", 5)) {
+@@ -169,8 +172,22 @@ static int __init housekeeping_isolcpus_setup(char *str)
+ 			continue;
+ 		}
+ 
+-		pr_warn("isolcpus: Error, unknown flag\n");
+-		return 0;
++		/*
++		 * Skip unknown sub-parameter and validate that it is not
++		 * containing an invalid character.
++		 */
++		for (par = str, len = 0; *str && *str != ','; str++, len++) {
++			if (!isalpha(*str) && *str != '_')
++				illegal = true;
++		}
++
++		if (illegal) {
++			pr_warn("isolcpus: Invalid flag %.*s\n", len, par);
++			return 0;
++		}
++
++		pr_info("isolcpus: Skipped unknown flag %.*s\n", len, par);
++		str++;
+ 	}
+ 
+ 	/* Default behaviour for isolcpus without flags */
+diff --git a/kernel/signal.c b/kernel/signal.c
+index 7938c60e11dd..9abf962bbde4 100644
+--- a/kernel/signal.c
++++ b/kernel/signal.c
+@@ -1510,15 +1510,15 @@ int kill_pid_usb_asyncio(int sig, int errno, sigval_t addr,
+ 	unsigned long flags;
+ 	int ret = -EINVAL;
+ 
++	if (!valid_signal(sig))
++		return ret;
++
+ 	clear_siginfo(&info);
+ 	info.si_signo = sig;
+ 	info.si_errno = errno;
+ 	info.si_code = SI_ASYNCIO;
+ 	*((sigval_t *)&info.si_pid) = addr;
+ 
+-	if (!valid_signal(sig))
+-		return ret;
+-
+ 	rcu_read_lock();
+ 	p = pid_task(pid, PIDTYPE_PID);
+ 	if (!p) {
+diff --git a/mm/shmem.c b/mm/shmem.c
+index aad3ba74b0e9..7406f91f8a52 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2404,11 +2404,11 @@ static int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
+ 
+ 	lru_cache_add_anon(page);
+ 
+-	spin_lock(&info->lock);
++	spin_lock_irq(&info->lock);
+ 	info->alloced++;
+ 	inode->i_blocks += BLOCKS_PER_PAGE;
+ 	shmem_recalc_inode(inode);
+-	spin_unlock(&info->lock);
++	spin_unlock_irq(&info->lock);
+ 
+ 	inc_mm_counter(dst_mm, mm_counter_file(page));
+ 	page_add_file_rmap(page, false);
+diff --git a/net/core/datagram.c b/net/core/datagram.c
+index a78e7f864c1e..56f0ccf677a5 100644
+--- a/net/core/datagram.c
++++ b/net/core/datagram.c
+@@ -51,6 +51,7 @@
+ #include <linux/slab.h>
+ #include <linux/pagemap.h>
+ #include <linux/uio.h>
++#include <linux/indirect_call_wrapper.h>
+ 
+ #include <net/protocol.h>
+ #include <linux/skbuff.h>
+@@ -414,6 +415,11 @@ int skb_kill_datagram(struct sock *sk, struct sk_buff *skb, unsigned int flags)
+ }
+ EXPORT_SYMBOL(skb_kill_datagram);
+ 
++INDIRECT_CALLABLE_DECLARE(static size_t simple_copy_to_iter(const void *addr,
++						size_t bytes,
++						void *data __always_unused,
++						struct iov_iter *i));
++
+ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
+ 			       struct iov_iter *to, int len, bool fault_short,
+ 			       size_t (*cb)(const void *, size_t, void *,
+@@ -427,7 +433,8 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
+ 	if (copy > 0) {
+ 		if (copy > len)
+ 			copy = len;
+-		n = cb(skb->data + offset, copy, data, to);
++		n = INDIRECT_CALL_1(cb, simple_copy_to_iter,
++				    skb->data + offset, copy, data, to);
+ 		offset += n;
+ 		if (n != copy)
+ 			goto short_copy;
+@@ -449,8 +456,9 @@ static int __skb_datagram_iter(const struct sk_buff *skb, int offset,
+ 
+ 			if (copy > len)
+ 				copy = len;
+-			n = cb(vaddr + skb_frag_off(frag) + offset - start,
+-			       copy, data, to);
++			n = INDIRECT_CALL_1(cb, simple_copy_to_iter,
++					vaddr + skb_frag_off(frag) + offset - start,
++					copy, data, to);
+ 			kunmap(page);
+ 			offset += n;
+ 			if (n != copy)
+diff --git a/net/mac80211/mesh.c b/net/mac80211/mesh.c
+index d09b3c789314..36978a0e5000 100644
+--- a/net/mac80211/mesh.c
++++ b/net/mac80211/mesh.c
+@@ -1257,15 +1257,15 @@ static void ieee80211_mesh_rx_bcn_presp(struct ieee80211_sub_if_data *sdata,
+ 		    sdata->u.mesh.mshcfg.rssi_threshold < rx_status->signal)
+ 			mesh_neighbour_update(sdata, mgmt->sa, &elems,
+ 					      rx_status);
++
++		if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT &&
++		    !sdata->vif.csa_active)
++			ieee80211_mesh_process_chnswitch(sdata, &elems, true);
+ 	}
+ 
+ 	if (ifmsh->sync_ops)
+ 		ifmsh->sync_ops->rx_bcn_presp(sdata,
+ 			stype, mgmt, &elems, rx_status);
+-
+-	if (ifmsh->csa_role != IEEE80211_MESH_CSA_ROLE_INIT &&
+-	    !sdata->vif.csa_active)
+-		ieee80211_mesh_process_chnswitch(sdata, &elems, true);
+ }
+ 
+ int ieee80211_mesh_finish_csa(struct ieee80211_sub_if_data *sdata)
+@@ -1373,6 +1373,9 @@ static void mesh_rx_csa_frame(struct ieee80211_sub_if_data *sdata,
+ 	ieee802_11_parse_elems(pos, len - baselen, true, &elems,
+ 			       mgmt->bssid, NULL);
+ 
++	if (!mesh_matches_local(sdata, &elems))
++		return;
++
+ 	ifmsh->chsw_ttl = elems.mesh_chansw_params_ie->mesh_ttl;
+ 	if (!--ifmsh->chsw_ttl)
+ 		fwd_csa = false;
+diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
+index 64eedc17037a..3d816a1e5442 100644
+--- a/net/netfilter/nf_nat_proto.c
++++ b/net/netfilter/nf_nat_proto.c
+@@ -1035,8 +1035,8 @@ int nf_nat_inet_register_fn(struct net *net, const struct nf_hook_ops *ops)
+ 	ret = nf_nat_register_fn(net, NFPROTO_IPV4, ops, nf_nat_ipv4_ops,
+ 				 ARRAY_SIZE(nf_nat_ipv4_ops));
+ 	if (ret)
+-		nf_nat_ipv6_unregister_fn(net, ops);
+-
++		nf_nat_unregister_fn(net, NFPROTO_IPV6, ops,
++					ARRAY_SIZE(nf_nat_ipv6_ops));
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(nf_nat_inet_register_fn);
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 68ec31c4ae65..116178d373a1 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -3949,7 +3949,7 @@ static int nf_tables_newset(struct net *net, struct sock *nlsk,
+ 		if (flags & ~(NFT_SET_ANONYMOUS | NFT_SET_CONSTANT |
+ 			      NFT_SET_INTERVAL | NFT_SET_TIMEOUT |
+ 			      NFT_SET_MAP | NFT_SET_EVAL |
+-			      NFT_SET_OBJECT))
++			      NFT_SET_OBJECT | NFT_SET_CONCAT))
+ 			return -EOPNOTSUPP;
+ 		/* Only one of these operations is supported */
+ 		if ((flags & (NFT_SET_MAP | NFT_SET_OBJECT)) ==
+diff --git a/net/rxrpc/local_object.c b/net/rxrpc/local_object.c
+index a6c1349e965d..01135e54d95d 100644
+--- a/net/rxrpc/local_object.c
++++ b/net/rxrpc/local_object.c
+@@ -165,15 +165,6 @@ static int rxrpc_open_socket(struct rxrpc_local *local, struct net *net)
+ 			goto error;
+ 		}
+ 
+-		/* we want to set the don't fragment bit */
+-		opt = IPV6_PMTUDISC_DO;
+-		ret = kernel_setsockopt(local->socket, SOL_IPV6, IPV6_MTU_DISCOVER,
+-					(char *) &opt, sizeof(opt));
+-		if (ret < 0) {
+-			_debug("setsockopt failed");
+-			goto error;
+-		}
+-
+ 		/* Fall through and set IPv4 options too otherwise we don't get
+ 		 * errors from IPv4 packets sent through the IPv6 socket.
+ 		 */
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index bad3d2420344..90e263c6aa69 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -474,41 +474,21 @@ send_fragmentable:
+ 	skb->tstamp = ktime_get_real();
+ 
+ 	switch (conn->params.local->srx.transport.family) {
++	case AF_INET6:
+ 	case AF_INET:
+ 		opt = IP_PMTUDISC_DONT;
+-		ret = kernel_setsockopt(conn->params.local->socket,
+-					SOL_IP, IP_MTU_DISCOVER,
+-					(char *)&opt, sizeof(opt));
+-		if (ret == 0) {
+-			ret = kernel_sendmsg(conn->params.local->socket, &msg,
+-					     iov, 2, len);
+-			conn->params.peer->last_tx_at = ktime_get_seconds();
+-
+-			opt = IP_PMTUDISC_DO;
+-			kernel_setsockopt(conn->params.local->socket, SOL_IP,
+-					  IP_MTU_DISCOVER,
+-					  (char *)&opt, sizeof(opt));
+-		}
+-		break;
+-
+-#ifdef CONFIG_AF_RXRPC_IPV6
+-	case AF_INET6:
+-		opt = IPV6_PMTUDISC_DONT;
+-		ret = kernel_setsockopt(conn->params.local->socket,
+-					SOL_IPV6, IPV6_MTU_DISCOVER,
+-					(char *)&opt, sizeof(opt));
+-		if (ret == 0) {
+-			ret = kernel_sendmsg(conn->params.local->socket, &msg,
+-					     iov, 2, len);
+-			conn->params.peer->last_tx_at = ktime_get_seconds();
+-
+-			opt = IPV6_PMTUDISC_DO;
+-			kernel_setsockopt(conn->params.local->socket,
+-					  SOL_IPV6, IPV6_MTU_DISCOVER,
+-					  (char *)&opt, sizeof(opt));
+-		}
++		kernel_setsockopt(conn->params.local->socket,
++				  SOL_IP, IP_MTU_DISCOVER,
++				  (char *)&opt, sizeof(opt));
++		ret = kernel_sendmsg(conn->params.local->socket, &msg,
++				     iov, 2, len);
++		conn->params.peer->last_tx_at = ktime_get_seconds();
++
++		opt = IP_PMTUDISC_DO;
++		kernel_setsockopt(conn->params.local->socket,
++				  SOL_IP, IP_MTU_DISCOVER,
++				  (char *)&opt, sizeof(opt));
+ 		break;
+-#endif
+ 
+ 	default:
+ 		BUG();
+diff --git a/net/sunrpc/svc_xprt.c b/net/sunrpc/svc_xprt.c
+index 298557744818..dc74519286be 100644
+--- a/net/sunrpc/svc_xprt.c
++++ b/net/sunrpc/svc_xprt.c
+@@ -897,9 +897,6 @@ int svc_send(struct svc_rqst *rqstp)
+ 	if (!xprt)
+ 		goto out;
+ 
+-	/* release the receive skb before sending the reply */
+-	xprt->xpt_ops->xpo_release_rqst(rqstp);
+-
+ 	/* calculate over-all length */
+ 	xb = &rqstp->rq_res;
+ 	xb->len = xb->head[0].iov_len +
+diff --git a/net/sunrpc/svcsock.c b/net/sunrpc/svcsock.c
+index 2934dd711715..4260924ad9db 100644
+--- a/net/sunrpc/svcsock.c
++++ b/net/sunrpc/svcsock.c
+@@ -605,6 +605,8 @@ svc_udp_sendto(struct svc_rqst *rqstp)
+ {
+ 	int		error;
+ 
++	svc_release_udp_skb(rqstp);
++
+ 	error = svc_sendto(rqstp, &rqstp->rq_res);
+ 	if (error == -ECONNREFUSED)
+ 		/* ICMP error on earlier request. */
+@@ -1137,6 +1139,8 @@ static int svc_tcp_sendto(struct svc_rqst *rqstp)
+ 	int sent;
+ 	__be32 reclen;
+ 
++	svc_release_skb(rqstp);
++
+ 	/* Set up the first element of the reply kvec.
+ 	 * Any other kvecs that may be in use have been taken
+ 	 * care of by the server implementation itself.
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+index 96bccd398469..b8ee91ffedda 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_recvfrom.c
+@@ -222,6 +222,26 @@ void svc_rdma_recv_ctxt_put(struct svcxprt_rdma *rdma,
+ 		svc_rdma_recv_ctxt_destroy(rdma, ctxt);
+ }
+ 
++/**
++ * svc_rdma_release_rqst - Release transport-specific per-rqst resources
++ * @rqstp: svc_rqst being released
++ *
++ * Ensure that the recv_ctxt is released whether or not a Reply
++ * was sent. For example, the client could close the connection,
++ * or svc_process could drop an RPC, before the Reply is sent.
++ */
++void svc_rdma_release_rqst(struct svc_rqst *rqstp)
++{
++	struct svc_rdma_recv_ctxt *ctxt = rqstp->rq_xprt_ctxt;
++	struct svc_xprt *xprt = rqstp->rq_xprt;
++	struct svcxprt_rdma *rdma =
++		container_of(xprt, struct svcxprt_rdma, sc_xprt);
++
++	rqstp->rq_xprt_ctxt = NULL;
++	if (ctxt)
++		svc_rdma_recv_ctxt_put(rdma, ctxt);
++}
++
+ static int __svc_rdma_post_recv(struct svcxprt_rdma *rdma,
+ 				struct svc_rdma_recv_ctxt *ctxt)
+ {
+@@ -756,6 +776,8 @@ int svc_rdma_recvfrom(struct svc_rqst *rqstp)
+ 	__be32 *p;
+ 	int ret;
+ 
++	rqstp->rq_xprt_ctxt = NULL;
++
+ 	spin_lock(&rdma_xprt->sc_rq_dto_lock);
+ 	ctxt = svc_rdma_next_recv_ctxt(&rdma_xprt->sc_read_complete_q);
+ 	if (ctxt) {
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_rw.c b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+index 48fe3b16b0d9..a59912e2666d 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_rw.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_rw.c
+@@ -323,8 +323,6 @@ static int svc_rdma_post_chunk_ctxt(struct svc_rdma_chunk_ctxt *cc)
+ 		if (atomic_sub_return(cc->cc_sqecount,
+ 				      &rdma->sc_sq_avail) > 0) {
+ 			ret = ib_post_send(rdma->sc_qp, first_wr, &bad_wr);
+-			trace_svcrdma_post_rw(&cc->cc_cqe,
+-					      cc->cc_sqecount, ret);
+ 			if (ret)
+ 				break;
+ 			return 0;
+@@ -337,6 +335,7 @@ static int svc_rdma_post_chunk_ctxt(struct svc_rdma_chunk_ctxt *cc)
+ 		trace_svcrdma_sq_retry(rdma);
+ 	} while (1);
+ 
++	trace_svcrdma_sq_post_err(rdma, ret);
+ 	set_bit(XPT_CLOSE, &xprt->xpt_flags);
+ 
+ 	/* If even one was posted, there will be a completion. */
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_sendto.c b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+index f3f108090aa4..9f234d1f3b3d 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_sendto.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_sendto.c
+@@ -310,15 +310,17 @@ int svc_rdma_send(struct svcxprt_rdma *rdma, struct ib_send_wr *wr)
+ 		}
+ 
+ 		svc_xprt_get(&rdma->sc_xprt);
++		trace_svcrdma_post_send(wr);
+ 		ret = ib_post_send(rdma->sc_qp, wr, NULL);
+-		trace_svcrdma_post_send(wr, ret);
+-		if (ret) {
+-			set_bit(XPT_CLOSE, &rdma->sc_xprt.xpt_flags);
+-			svc_xprt_put(&rdma->sc_xprt);
+-			wake_up(&rdma->sc_send_wait);
+-		}
+-		break;
++		if (ret)
++			break;
++		return 0;
+ 	}
++
++	trace_svcrdma_sq_post_err(rdma, ret);
++	set_bit(XPT_CLOSE, &rdma->sc_xprt.xpt_flags);
++	svc_xprt_put(&rdma->sc_xprt);
++	wake_up(&rdma->sc_send_wait);
+ 	return ret;
+ }
+ 
+@@ -875,12 +877,7 @@ int svc_rdma_sendto(struct svc_rqst *rqstp)
+ 				      wr_lst, rp_ch);
+ 	if (ret < 0)
+ 		goto err1;
+-	ret = 0;
+-
+-out:
+-	rqstp->rq_xprt_ctxt = NULL;
+-	svc_rdma_recv_ctxt_put(rdma, rctxt);
+-	return ret;
++	return 0;
+ 
+  err2:
+ 	if (ret != -E2BIG && ret != -EINVAL)
+@@ -889,14 +886,12 @@ out:
+ 	ret = svc_rdma_send_error_msg(rdma, sctxt, rqstp);
+ 	if (ret < 0)
+ 		goto err1;
+-	ret = 0;
+-	goto out;
++	return 0;
+ 
+  err1:
+ 	svc_rdma_send_ctxt_put(rdma, sctxt);
+  err0:
+ 	trace_svcrdma_send_failed(rqstp, ret);
+ 	set_bit(XPT_CLOSE, &xprt->xpt_flags);
+-	ret = -ENOTCONN;
+-	goto out;
++	return -ENOTCONN;
+ }
+diff --git a/net/sunrpc/xprtrdma/svc_rdma_transport.c b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+index 145a3615c319..889220f11a70 100644
+--- a/net/sunrpc/xprtrdma/svc_rdma_transport.c
++++ b/net/sunrpc/xprtrdma/svc_rdma_transport.c
+@@ -71,7 +71,6 @@ static struct svc_xprt *svc_rdma_create(struct svc_serv *serv,
+ 					struct sockaddr *sa, int salen,
+ 					int flags);
+ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt);
+-static void svc_rdma_release_rqst(struct svc_rqst *);
+ static void svc_rdma_detach(struct svc_xprt *xprt);
+ static void svc_rdma_free(struct svc_xprt *xprt);
+ static int svc_rdma_has_wspace(struct svc_xprt *xprt);
+@@ -558,10 +557,6 @@ static struct svc_xprt *svc_rdma_accept(struct svc_xprt *xprt)
+ 	return NULL;
+ }
+ 
+-static void svc_rdma_release_rqst(struct svc_rqst *rqstp)
+-{
+-}
+-
+ /*
+  * When connected, an svc_xprt has at least two references:
+  *
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 467c53a1fb5c..d4675e922a8f 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -1065,7 +1065,7 @@ static void tipc_link_update_cwin(struct tipc_link *l, int released,
+ 	/* Enter fast recovery */
+ 	if (unlikely(retransmitted)) {
+ 		l->ssthresh = max_t(u16, l->window / 2, 300);
+-		l->window = l->ssthresh;
++		l->window = min_t(u16, l->ssthresh, l->window);
+ 		return;
+ 	}
+ 	/* Enter slow start */
+diff --git a/net/tipc/msg.h b/net/tipc/msg.h
+index 6d466ebdb64f..871feadbbc19 100644
+--- a/net/tipc/msg.h
++++ b/net/tipc/msg.h
+@@ -394,6 +394,11 @@ static inline u32 msg_connected(struct tipc_msg *m)
+ 	return msg_type(m) == TIPC_CONN_MSG;
+ }
+ 
++static inline u32 msg_direct(struct tipc_msg *m)
++{
++	return msg_type(m) == TIPC_DIRECT_MSG;
++}
++
+ static inline u32 msg_errcode(struct tipc_msg *m)
+ {
+ 	return msg_bits(m, 1, 25, 0xf);
+diff --git a/net/tipc/node.c b/net/tipc/node.c
+index d50be9a3d479..803a3a6d0f50 100644
+--- a/net/tipc/node.c
++++ b/net/tipc/node.c
+@@ -1586,7 +1586,8 @@ static void tipc_lxc_xmit(struct net *peer_net, struct sk_buff_head *list)
+ 	case TIPC_MEDIUM_IMPORTANCE:
+ 	case TIPC_HIGH_IMPORTANCE:
+ 	case TIPC_CRITICAL_IMPORTANCE:
+-		if (msg_connected(hdr) || msg_named(hdr)) {
++		if (msg_connected(hdr) || msg_named(hdr) ||
++		    msg_direct(hdr)) {
+ 			tipc_loopback_trace(peer_net, list);
+ 			spin_lock_init(&list->lock);
+ 			tipc_sk_rcv(peer_net, list);
+diff --git a/net/tipc/socket.c b/net/tipc/socket.c
+index 693e8902161e..87466607097f 100644
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -1461,7 +1461,7 @@ static int __tipc_sendmsg(struct socket *sock, struct msghdr *m, size_t dlen)
+ 	}
+ 
+ 	__skb_queue_head_init(&pkts);
+-	mtu = tipc_node_get_mtu(net, dnode, tsk->portid, false);
++	mtu = tipc_node_get_mtu(net, dnode, tsk->portid, true);
+ 	rc = tipc_msg_build(hdr, m, 0, dlen, mtu, &pkts);
+ 	if (unlikely(rc != dlen))
+ 		return rc;
+diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
+index 752ff0a225a9..f24ff5a903ae 100644
+--- a/scripts/Makefile.lib
++++ b/scripts/Makefile.lib
+@@ -308,7 +308,7 @@ define rule_dtc
+ endef
+ 
+ $(obj)/%.dt.yaml: $(src)/%.dts $(DTC) $(DT_TMP_SCHEMA) FORCE
+-	$(call if_changed_rule,dtc)
++	$(call if_changed_rule,dtc,yaml)
+ 
+ dtc-tmp = $(subst $(comma),_,$(dot-target).dts.tmp)
+ 
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index af21e9583c0d..59b60b1f26f8 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -1203,10 +1203,8 @@ static void azx_vs_set_state(struct pci_dev *pci,
+ 		if (!disabled) {
+ 			dev_info(chip->card->dev,
+ 				 "Start delayed initialization\n");
+-			if (azx_probe_continue(chip) < 0) {
++			if (azx_probe_continue(chip) < 0)
+ 				dev_err(chip->card->dev, "initialization error\n");
+-				hda->init_failed = true;
+-			}
+ 		}
+ 	} else {
+ 		dev_info(chip->card->dev, "%s via vga_switcheroo\n",
+@@ -1339,12 +1337,15 @@ static int register_vga_switcheroo(struct azx *chip)
+ /*
+  * destructor
+  */
+-static int azx_free(struct azx *chip)
++static void azx_free(struct azx *chip)
+ {
+ 	struct pci_dev *pci = chip->pci;
+ 	struct hda_intel *hda = container_of(chip, struct hda_intel, chip);
+ 	struct hdac_bus *bus = azx_bus(chip);
+ 
++	if (hda->freed)
++		return;
++
+ 	if (azx_has_pm_runtime(chip) && chip->running)
+ 		pm_runtime_get_noresume(&pci->dev);
+ 	chip->running = 0;
+@@ -1388,9 +1389,8 @@ static int azx_free(struct azx *chip)
+ 
+ 	if (chip->driver_caps & AZX_DCAPS_I915_COMPONENT)
+ 		snd_hdac_i915_exit(bus);
+-	kfree(hda);
+ 
+-	return 0;
++	hda->freed = 1;
+ }
+ 
+ static int azx_dev_disconnect(struct snd_device *device)
+@@ -1406,7 +1406,8 @@ static int azx_dev_disconnect(struct snd_device *device)
+ 
+ static int azx_dev_free(struct snd_device *device)
+ {
+-	return azx_free(device->device_data);
++	azx_free(device->device_data);
++	return 0;
+ }
+ 
+ #ifdef SUPPORT_VGA_SWITCHEROO
+@@ -1773,7 +1774,7 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 	if (err < 0)
+ 		return err;
+ 
+-	hda = kzalloc(sizeof(*hda), GFP_KERNEL);
++	hda = devm_kzalloc(&pci->dev, sizeof(*hda), GFP_KERNEL);
+ 	if (!hda) {
+ 		pci_disable_device(pci);
+ 		return -ENOMEM;
+@@ -1814,7 +1815,6 @@ static int azx_create(struct snd_card *card, struct pci_dev *pci,
+ 
+ 	err = azx_bus_init(chip, model[dev]);
+ 	if (err < 0) {
+-		kfree(hda);
+ 		pci_disable_device(pci);
+ 		return err;
+ 	}
+@@ -2009,7 +2009,7 @@ static int azx_first_init(struct azx *chip)
+ 	/* codec detection */
+ 	if (!azx_bus(chip)->codec_mask) {
+ 		dev_err(card->dev, "no codecs found!\n");
+-		return -ENODEV;
++		/* keep running the rest for the runtime PM */
+ 	}
+ 
+ 	if (azx_acquire_irq(chip, 0) < 0)
+@@ -2302,9 +2302,11 @@ static int azx_probe_continue(struct azx *chip)
+ #endif
+ 
+ 	/* create codec instances */
+-	err = azx_probe_codecs(chip, azx_max_codecs[chip->driver_type]);
+-	if (err < 0)
+-		goto out_free;
++	if (bus->codec_mask) {
++		err = azx_probe_codecs(chip, azx_max_codecs[chip->driver_type]);
++		if (err < 0)
++			goto out_free;
++	}
+ 
+ #ifdef CONFIG_SND_HDA_PATCH_LOADER
+ 	if (chip->fw) {
+@@ -2318,7 +2320,7 @@ static int azx_probe_continue(struct azx *chip)
+ #endif
+ 	}
+ #endif
+-	if ((probe_only[dev] & 1) == 0) {
++	if (bus->codec_mask && !(probe_only[dev] & 1)) {
+ 		err = azx_codec_configure(chip);
+ 		if (err < 0)
+ 			goto out_free;
+@@ -2335,17 +2337,23 @@ static int azx_probe_continue(struct azx *chip)
+ 
+ 	set_default_power_save(chip);
+ 
+-	if (azx_has_pm_runtime(chip))
++	if (azx_has_pm_runtime(chip)) {
++		pm_runtime_use_autosuspend(&pci->dev);
++		pm_runtime_allow(&pci->dev);
+ 		pm_runtime_put_autosuspend(&pci->dev);
++	}
+ 
+ out_free:
+-	if (err < 0 || !hda->need_i915_power)
++	if (err < 0) {
++		azx_free(chip);
++		return err;
++	}
++
++	if (!hda->need_i915_power)
+ 		display_power(chip, false);
+-	if (err < 0)
+-		hda->init_failed = 1;
+ 	complete_all(&hda->probe_wait);
+ 	to_hda_bus(bus)->bus_probing = 0;
+-	return err;
++	return 0;
+ }
+ 
+ static void azx_remove(struct pci_dev *pci)
+diff --git a/sound/pci/hda/hda_intel.h b/sound/pci/hda/hda_intel.h
+index 2acfff3da1a0..3fb119f09040 100644
+--- a/sound/pci/hda/hda_intel.h
++++ b/sound/pci/hda/hda_intel.h
+@@ -27,6 +27,7 @@ struct hda_intel {
+ 	unsigned int use_vga_switcheroo:1;
+ 	unsigned int vga_switcheroo_registered:1;
+ 	unsigned int init_failed:1; /* delayed init failed */
++	unsigned int freed:1; /* resources already released */
+ 
+ 	bool need_i915_power:1; /* the hda controller needs i915 power */
+ };
+diff --git a/sound/soc/codecs/tas571x.c b/sound/soc/codecs/tas571x.c
+index 1554631cb397..5b7f9fcf6cbf 100644
+--- a/sound/soc/codecs/tas571x.c
++++ b/sound/soc/codecs/tas571x.c
+@@ -820,8 +820,10 @@ static int tas571x_i2c_probe(struct i2c_client *client,
+ 
+ 	priv->regmap = devm_regmap_init(dev, NULL, client,
+ 					priv->chip->regmap_config);
+-	if (IS_ERR(priv->regmap))
+-		return PTR_ERR(priv->regmap);
++	if (IS_ERR(priv->regmap)) {
++		ret = PTR_ERR(priv->regmap);
++		goto disable_regs;
++	}
+ 
+ 	priv->pdn_gpio = devm_gpiod_get_optional(dev, "pdn", GPIOD_OUT_LOW);
+ 	if (IS_ERR(priv->pdn_gpio)) {
+@@ -845,7 +847,7 @@ static int tas571x_i2c_probe(struct i2c_client *client,
+ 
+ 	ret = regmap_write(priv->regmap, TAS571X_OSC_TRIM_REG, 0);
+ 	if (ret)
+-		return ret;
++		goto disable_regs;
+ 
+ 	usleep_range(50000, 60000);
+ 
+@@ -861,12 +863,20 @@ static int tas571x_i2c_probe(struct i2c_client *client,
+ 		 */
+ 		ret = regmap_update_bits(priv->regmap, TAS571X_MVOL_REG, 1, 0);
+ 		if (ret)
+-			return ret;
++			goto disable_regs;
+ 	}
+ 
+-	return devm_snd_soc_register_component(&client->dev,
++	ret = devm_snd_soc_register_component(&client->dev,
+ 				      &priv->component_driver,
+ 				      &tas571x_dai, 1);
++	if (ret)
++		goto disable_regs;
++
++	return ret;
++
++disable_regs:
++	regulator_bulk_disable(priv->chip->num_supply_names, priv->supplies);
++	return ret;
+ }
+ 
+ static int tas571x_i2c_remove(struct i2c_client *client)
+diff --git a/sound/soc/codecs/wm8960.c b/sound/soc/codecs/wm8960.c
+index 55112c1bba5e..6cf0f6612bda 100644
+--- a/sound/soc/codecs/wm8960.c
++++ b/sound/soc/codecs/wm8960.c
+@@ -860,8 +860,7 @@ static int wm8960_hw_params(struct snd_pcm_substream *substream,
+ 
+ 	wm8960->is_stream_in_use[tx] = true;
+ 
+-	if (snd_soc_component_get_bias_level(component) == SND_SOC_BIAS_ON &&
+-	    !wm8960->is_stream_in_use[!tx])
++	if (!wm8960->is_stream_in_use[!tx])
+ 		return wm8960_configure_clocking(component);
+ 
+ 	return 0;
+diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c
+index 1f698adde506..2b04ac3d8fd3 100644
+--- a/sound/soc/meson/axg-card.c
++++ b/sound/soc/meson/axg-card.c
+@@ -586,8 +586,10 @@ static int axg_card_add_link(struct snd_soc_card *card, struct device_node *np,
+ 
+ 	if (axg_card_cpu_is_tdm_iface(dai_link->cpus->of_node))
+ 		ret = axg_card_parse_tdm(card, np, index);
+-	else if (axg_card_cpu_is_codec(dai_link->cpus->of_node))
++	else if (axg_card_cpu_is_codec(dai_link->cpus->of_node)) {
+ 		dai_link->params = &codec_params;
++		dai_link->no_pcm = 0; /* link is not a DPCM BE */
++	}
+ 
+ 	return ret;
+ }
+diff --git a/sound/soc/qcom/qdsp6/q6afe-dai.c b/sound/soc/qcom/qdsp6/q6afe-dai.c
+index c1a7624eaf17..2a5302f1db98 100644
+--- a/sound/soc/qcom/qdsp6/q6afe-dai.c
++++ b/sound/soc/qcom/qdsp6/q6afe-dai.c
+@@ -902,6 +902,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE |
+ 				   SNDRV_PCM_FMTBIT_S24_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -917,6 +919,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE |
+ 				   SNDRV_PCM_FMTBIT_S24_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -931,6 +935,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 			.rates = SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_8000 |
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -946,6 +952,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE |
+ 				   SNDRV_PCM_FMTBIT_S24_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -960,6 +968,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 			.rates = SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_8000 |
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -975,6 +985,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE |
+ 				   SNDRV_PCM_FMTBIT_S24_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -989,6 +1001,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 			.rates = SNDRV_PCM_RATE_48000 | SNDRV_PCM_RATE_8000 |
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+@@ -1004,6 +1018,8 @@ static struct snd_soc_dai_driver q6afe_dais[] = {
+ 				 SNDRV_PCM_RATE_16000,
+ 			.formats = SNDRV_PCM_FMTBIT_S16_LE |
+ 				   SNDRV_PCM_FMTBIT_S24_LE,
++			.channels_min = 1,
++			.channels_max = 8,
+ 			.rate_min =     8000,
+ 			.rate_max =     48000,
+ 		},
+diff --git a/sound/soc/samsung/s3c-i2s-v2.c b/sound/soc/samsung/s3c-i2s-v2.c
+index 593be1b668d6..b3e12d6a78a1 100644
+--- a/sound/soc/samsung/s3c-i2s-v2.c
++++ b/sound/soc/samsung/s3c-i2s-v2.c
+@@ -656,60 +656,6 @@ void s3c_i2sv2_cleanup(struct snd_soc_dai *dai,
+ }
+ EXPORT_SYMBOL_GPL(s3c_i2sv2_cleanup);
+ 
+-#ifdef CONFIG_PM
+-static int s3c2412_i2s_suspend(struct snd_soc_dai *dai)
+-{
+-	struct s3c_i2sv2_info *i2s = to_info(dai);
+-	u32 iismod;
+-
+-	if (dai->active) {
+-		i2s->suspend_iismod = readl(i2s->regs + S3C2412_IISMOD);
+-		i2s->suspend_iiscon = readl(i2s->regs + S3C2412_IISCON);
+-		i2s->suspend_iispsr = readl(i2s->regs + S3C2412_IISPSR);
+-
+-		/* some basic suspend checks */
+-
+-		iismod = readl(i2s->regs + S3C2412_IISMOD);
+-
+-		if (iismod & S3C2412_IISCON_RXDMA_ACTIVE)
+-			pr_warn("%s: RXDMA active?\n", __func__);
+-
+-		if (iismod & S3C2412_IISCON_TXDMA_ACTIVE)
+-			pr_warn("%s: TXDMA active?\n", __func__);
+-
+-		if (iismod & S3C2412_IISCON_IIS_ACTIVE)
+-			pr_warn("%s: IIS active\n", __func__);
+-	}
+-
+-	return 0;
+-}
+-
+-static int s3c2412_i2s_resume(struct snd_soc_dai *dai)
+-{
+-	struct s3c_i2sv2_info *i2s = to_info(dai);
+-
+-	pr_info("dai_active %d, IISMOD %08x, IISCON %08x\n",
+-		dai->active, i2s->suspend_iismod, i2s->suspend_iiscon);
+-
+-	if (dai->active) {
+-		writel(i2s->suspend_iiscon, i2s->regs + S3C2412_IISCON);
+-		writel(i2s->suspend_iismod, i2s->regs + S3C2412_IISMOD);
+-		writel(i2s->suspend_iispsr, i2s->regs + S3C2412_IISPSR);
+-
+-		writel(S3C2412_IISFIC_RXFLUSH | S3C2412_IISFIC_TXFLUSH,
+-		       i2s->regs + S3C2412_IISFIC);
+-
+-		ndelay(250);
+-		writel(0x0, i2s->regs + S3C2412_IISFIC);
+-	}
+-
+-	return 0;
+-}
+-#else
+-#define s3c2412_i2s_suspend NULL
+-#define s3c2412_i2s_resume  NULL
+-#endif
+-
+ int s3c_i2sv2_register_component(struct device *dev, int id,
+ 			   const struct snd_soc_component_driver *cmp_drv,
+ 			   struct snd_soc_dai_driver *dai_drv)
+@@ -727,9 +673,6 @@ int s3c_i2sv2_register_component(struct device *dev, int id,
+ 	if (!ops->delay)
+ 		ops->delay = s3c2412_i2s_delay;
+ 
+-	dai_drv->suspend = s3c2412_i2s_suspend;
+-	dai_drv->resume = s3c2412_i2s_resume;
+-
+ 	return devm_snd_soc_register_component(dev, cmp_drv, dai_drv, 1);
+ }
+ EXPORT_SYMBOL_GPL(s3c_i2sv2_register_component);
+diff --git a/sound/soc/samsung/s3c2412-i2s.c b/sound/soc/samsung/s3c2412-i2s.c
+index 787a3f6e9f24..b35d828c1cfe 100644
+--- a/sound/soc/samsung/s3c2412-i2s.c
++++ b/sound/soc/samsung/s3c2412-i2s.c
+@@ -117,6 +117,60 @@ static int s3c2412_i2s_hw_params(struct snd_pcm_substream *substream,
+ 	return 0;
+ }
+ 
++#ifdef CONFIG_PM
++static int s3c2412_i2s_suspend(struct snd_soc_component *component)
++{
++	struct s3c_i2sv2_info *i2s = snd_soc_component_get_drvdata(component);
++	u32 iismod;
++
++	if (component->active) {
++		i2s->suspend_iismod = readl(i2s->regs + S3C2412_IISMOD);
++		i2s->suspend_iiscon = readl(i2s->regs + S3C2412_IISCON);
++		i2s->suspend_iispsr = readl(i2s->regs + S3C2412_IISPSR);
++
++		/* some basic suspend checks */
++
++		iismod = readl(i2s->regs + S3C2412_IISMOD);
++
++		if (iismod & S3C2412_IISCON_RXDMA_ACTIVE)
++			pr_warn("%s: RXDMA active?\n", __func__);
++
++		if (iismod & S3C2412_IISCON_TXDMA_ACTIVE)
++			pr_warn("%s: TXDMA active?\n", __func__);
++
++		if (iismod & S3C2412_IISCON_IIS_ACTIVE)
++			pr_warn("%s: IIS active\n", __func__);
++	}
++
++	return 0;
++}
++
++static int s3c2412_i2s_resume(struct snd_soc_component *component)
++{
++	struct s3c_i2sv2_info *i2s = snd_soc_component_get_drvdata(component);
++
++	pr_info("component_active %d, IISMOD %08x, IISCON %08x\n",
++		component->active, i2s->suspend_iismod, i2s->suspend_iiscon);
++
++	if (component->active) {
++		writel(i2s->suspend_iiscon, i2s->regs + S3C2412_IISCON);
++		writel(i2s->suspend_iismod, i2s->regs + S3C2412_IISMOD);
++		writel(i2s->suspend_iispsr, i2s->regs + S3C2412_IISPSR);
++
++		writel(S3C2412_IISFIC_RXFLUSH | S3C2412_IISFIC_TXFLUSH,
++		       i2s->regs + S3C2412_IISFIC);
++
++		ndelay(250);
++		writel(0x0, i2s->regs + S3C2412_IISFIC);
++	}
++
++	return 0;
++}
++#else
++#define s3c2412_i2s_suspend NULL
++#define s3c2412_i2s_resume  NULL
++#endif
++
+ #define S3C2412_I2S_RATES \
+ 	(SNDRV_PCM_RATE_8000 | SNDRV_PCM_RATE_11025 | SNDRV_PCM_RATE_16000 | \
+ 	SNDRV_PCM_RATE_22050 | SNDRV_PCM_RATE_32000 | SNDRV_PCM_RATE_44100 | \
+@@ -146,6 +200,8 @@ static struct snd_soc_dai_driver s3c2412_i2s_dai = {
+ 
+ static const struct snd_soc_component_driver s3c2412_i2s_component = {
+ 	.name		= "s3c2412-i2s",
++	.suspend	= s3c2412_i2s_suspend,
++	.resume		= s3c2412_i2s_resume,
+ };
+ 
+ static int s3c2412_iis_dev_probe(struct platform_device *pdev)
+diff --git a/sound/soc/soc-core.c b/sound/soc/soc-core.c
+index 068d809c349a..b17366bac846 100644
+--- a/sound/soc/soc-core.c
++++ b/sound/soc/soc-core.c
+@@ -1256,8 +1256,18 @@ static int soc_probe_component(struct snd_soc_card *card,
+ 	ret = snd_soc_dapm_add_routes(dapm,
+ 				      component->driver->dapm_routes,
+ 				      component->driver->num_dapm_routes);
+-	if (ret < 0)
+-		goto err_probe;
++	if (ret < 0) {
++		if (card->disable_route_checks) {
++			dev_info(card->dev,
++				 "%s: disable_route_checks set, ignoring errors on add_routes\n",
++				 __func__);
++		} else {
++			dev_err(card->dev,
++				"%s: snd_soc_dapm_add_routes failed: %d\n",
++				__func__, ret);
++			goto err_probe;
++		}
++	}
+ 
+ 	/* see for_each_card_components */
+ 	list_add(&component->card_list, &card->component_dev_list);
+@@ -1938,8 +1948,18 @@ static int snd_soc_bind_card(struct snd_soc_card *card)
+ 
+ 	ret = snd_soc_dapm_add_routes(&card->dapm, card->dapm_routes,
+ 				      card->num_dapm_routes);
+-	if (ret < 0)
+-		goto probe_end;
++	if (ret < 0) {
++		if (card->disable_route_checks) {
++			dev_info(card->dev,
++				 "%s: disable_route_checks set, ignoring errors on add_routes\n",
++				 __func__);
++		} else {
++			dev_err(card->dev,
++				 "%s: snd_soc_dapm_add_routes failed: %d\n",
++				 __func__, ret);
++			goto probe_end;
++		}
++	}
+ 
+ 	ret = snd_soc_dapm_add_routes(&card->dapm, card->of_dapm_routes,
+ 				      card->num_of_dapm_routes);
+diff --git a/sound/soc/soc-pcm.c b/sound/soc/soc-pcm.c
+index 8f6f0ad50288..10e2305bb885 100644
+--- a/sound/soc/soc-pcm.c
++++ b/sound/soc/soc-pcm.c
+@@ -2890,22 +2890,19 @@ int soc_new_pcm(struct snd_soc_pcm_runtime *rtd, int num)
+ 		capture = rtd->dai_link->dpcm_capture;
+ 	} else {
+ 		/* Adapt stream for codec2codec links */
+-		struct snd_soc_pcm_stream *cpu_capture = rtd->dai_link->params ?
+-			&cpu_dai->driver->playback : &cpu_dai->driver->capture;
+-		struct snd_soc_pcm_stream *cpu_playback = rtd->dai_link->params ?
+-			&cpu_dai->driver->capture : &cpu_dai->driver->playback;
++		int cpu_capture = rtd->dai_link->params ?
++			SNDRV_PCM_STREAM_PLAYBACK : SNDRV_PCM_STREAM_CAPTURE;
++		int cpu_playback = rtd->dai_link->params ?
++			SNDRV_PCM_STREAM_CAPTURE : SNDRV_PCM_STREAM_PLAYBACK;
+ 
+ 		for_each_rtd_codec_dai(rtd, i, codec_dai) {
+ 			if (snd_soc_dai_stream_valid(codec_dai, SNDRV_PCM_STREAM_PLAYBACK) &&
+-			    snd_soc_dai_stream_valid(cpu_dai,   SNDRV_PCM_STREAM_CAPTURE))
++			    snd_soc_dai_stream_valid(cpu_dai,   cpu_playback))
+ 				playback = 1;
+ 			if (snd_soc_dai_stream_valid(codec_dai, SNDRV_PCM_STREAM_CAPTURE) &&
+-			    snd_soc_dai_stream_valid(cpu_dai,   SNDRV_PCM_STREAM_PLAYBACK))
++			    snd_soc_dai_stream_valid(cpu_dai,   cpu_capture))
+ 				capture = 1;
+ 		}
+-
+-		capture = capture && cpu_capture->channels_min;
+-		playback = playback && cpu_playback->channels_min;
+ 	}
+ 
+ 	if (rtd->dai_link->playback_only) {
+diff --git a/sound/soc/stm/stm32_sai_sub.c b/sound/soc/stm/stm32_sai_sub.c
+index d3259de43712..7e965848796c 100644
+--- a/sound/soc/stm/stm32_sai_sub.c
++++ b/sound/soc/stm/stm32_sai_sub.c
+@@ -1543,6 +1543,9 @@ static int stm32_sai_sub_probe(struct platform_device *pdev)
+ 		return ret;
+ 	}
+ 
++	if (STM_SAI_PROTOCOL_IS_SPDIF(sai))
++		conf = &stm32_sai_pcm_config_spdif;
++
+ 	ret = snd_dmaengine_pcm_register(&pdev->dev, conf, 0);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Could not register pcm dma\n");
+@@ -1551,15 +1554,10 @@ static int stm32_sai_sub_probe(struct platform_device *pdev)
+ 
+ 	ret = snd_soc_register_component(&pdev->dev, &stm32_component,
+ 					 &sai->cpu_dai_drv, 1);
+-	if (ret) {
++	if (ret)
+ 		snd_dmaengine_pcm_unregister(&pdev->dev);
+-		return ret;
+-	}
+-
+-	if (STM_SAI_PROTOCOL_IS_SPDIF(sai))
+-		conf = &stm32_sai_pcm_config_spdif;
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static int stm32_sai_sub_remove(struct platform_device *pdev)
+diff --git a/sound/soc/stm/stm32_spdifrx.c b/sound/soc/stm/stm32_spdifrx.c
+index 3769d9ce5dbe..e6e75897cce8 100644
+--- a/sound/soc/stm/stm32_spdifrx.c
++++ b/sound/soc/stm/stm32_spdifrx.c
+@@ -1009,6 +1009,8 @@ static int stm32_spdifrx_probe(struct platform_device *pdev)
+ 
+ 	if (idr == SPDIFRX_IPIDR_NUMBER) {
+ 		ret = regmap_read(spdifrx->regmap, STM32_SPDIFRX_VERR, &ver);
++		if (ret)
++			goto error;
+ 
+ 		dev_dbg(&pdev->dev, "SPDIFRX version: %lu.%lu registered\n",
+ 			FIELD_GET(SPDIFRX_VERR_MAJ_MASK, ver),
+diff --git a/tools/lib/bpf/netlink.c b/tools/lib/bpf/netlink.c
+index c364e4be5e6e..c1a7fc185940 100644
+--- a/tools/lib/bpf/netlink.c
++++ b/tools/lib/bpf/netlink.c
+@@ -141,7 +141,7 @@ int bpf_set_link_xdp_fd(int ifindex, int fd, __u32 flags)
+ 		struct ifinfomsg ifinfo;
+ 		char             attrbuf[64];
+ 	} req;
+-	__u32 nl_pid;
++	__u32 nl_pid = 0;
+ 
+ 	sock = libbpf_netlink_open(&nl_pid);
+ 	if (sock < 0)
+@@ -256,7 +256,7 @@ int bpf_get_link_xdp_info(int ifindex, struct xdp_link_info *info,
+ {
+ 	struct xdp_id_md xdp_id = {};
+ 	int sock, ret;
+-	__u32 nl_pid;
++	__u32 nl_pid = 0;
+ 	__u32 mask;
+ 
+ 	if (flags & ~XDP_FLAGS_MASK || !info_size)
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 2b765bbbef92..95c485d3d4d8 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -2307,14 +2307,27 @@ static bool ignore_unreachable_insn(struct instruction *insn)
+ 	    !strcmp(insn->sec->name, ".altinstr_aux"))
+ 		return true;
+ 
++	if (!insn->func)
++		return false;
++
++	/*
++	 * CONFIG_UBSAN_TRAP inserts a UD2 when it sees
++	 * __builtin_unreachable().  The BUG() macro has an unreachable() after
++	 * the UD2, which causes GCC's undefined trap logic to emit another UD2
++	 * (or occasionally a JMP to UD2).
++	 */
++	if (list_prev_entry(insn, list)->dead_end &&
++	    (insn->type == INSN_BUG ||
++	     (insn->type == INSN_JUMP_UNCONDITIONAL &&
++	      insn->jump_dest && insn->jump_dest->type == INSN_BUG)))
++		return true;
++
+ 	/*
+ 	 * Check if this (or a subsequent) instruction is related to
+ 	 * CONFIG_UBSAN or CONFIG_KASAN.
+ 	 *
+ 	 * End the search at 5 instructions to avoid going into the weeds.
+ 	 */
+-	if (!insn->func)
+-		return false;
+ 	for (i = 0; i < 5; i++) {
+ 
+ 		if (is_kasan_insn(insn) || is_ubsan_insn(insn))
+diff --git a/tools/objtool/orc_dump.c b/tools/objtool/orc_dump.c
+index 13ccf775a83a..ba4cbb1cdd63 100644
+--- a/tools/objtool/orc_dump.c
++++ b/tools/objtool/orc_dump.c
+@@ -66,7 +66,7 @@ int orc_dump(const char *_objname)
+ 	char *name;
+ 	size_t nr_sections;
+ 	Elf64_Addr orc_ip_addr = 0;
+-	size_t shstrtab_idx;
++	size_t shstrtab_idx, strtab_idx = 0;
+ 	Elf *elf;
+ 	Elf_Scn *scn;
+ 	GElf_Shdr sh;
+@@ -127,6 +127,8 @@ int orc_dump(const char *_objname)
+ 
+ 		if (!strcmp(name, ".symtab")) {
+ 			symtab = data;
++		} else if (!strcmp(name, ".strtab")) {
++			strtab_idx = i;
+ 		} else if (!strcmp(name, ".orc_unwind")) {
+ 			orc = data->d_buf;
+ 			orc_size = sh.sh_size;
+@@ -138,7 +140,7 @@ int orc_dump(const char *_objname)
+ 		}
+ 	}
+ 
+-	if (!symtab || !orc || !orc_ip)
++	if (!symtab || !strtab_idx || !orc || !orc_ip)
+ 		return 0;
+ 
+ 	if (orc_size % sizeof(*orc) != 0) {
+@@ -159,21 +161,29 @@ int orc_dump(const char *_objname)
+ 				return -1;
+ 			}
+ 
+-			scn = elf_getscn(elf, sym.st_shndx);
+-			if (!scn) {
+-				WARN_ELF("elf_getscn");
+-				return -1;
+-			}
+-
+-			if (!gelf_getshdr(scn, &sh)) {
+-				WARN_ELF("gelf_getshdr");
+-				return -1;
+-			}
+-
+-			name = elf_strptr(elf, shstrtab_idx, sh.sh_name);
+-			if (!name || !*name) {
+-				WARN_ELF("elf_strptr");
+-				return -1;
++			if (GELF_ST_TYPE(sym.st_info) == STT_SECTION) {
++				scn = elf_getscn(elf, sym.st_shndx);
++				if (!scn) {
++					WARN_ELF("elf_getscn");
++					return -1;
++				}
++
++				if (!gelf_getshdr(scn, &sh)) {
++					WARN_ELF("gelf_getshdr");
++					return -1;
++				}
++
++				name = elf_strptr(elf, shstrtab_idx, sh.sh_name);
++				if (!name) {
++					WARN_ELF("elf_strptr");
++					return -1;
++				}
++			} else {
++				name = elf_strptr(elf, strtab_idx, sym.st_name);
++				if (!name) {
++					WARN_ELF("elf_strptr");
++					return -1;
++				}
+ 			}
+ 
+ 			printf("%s+%llx:", name, (unsigned long long)rela.r_addend);
+diff --git a/tools/testing/selftests/bpf/progs/test_btf_haskv.c b/tools/testing/selftests/bpf/progs/test_btf_haskv.c
+index 88b0566da13d..31538c9ed193 100644
+--- a/tools/testing/selftests/bpf/progs/test_btf_haskv.c
++++ b/tools/testing/selftests/bpf/progs/test_btf_haskv.c
+@@ -20,20 +20,12 @@ struct bpf_map_def SEC("maps") btf_map = {
+ 
+ BPF_ANNOTATE_KV_PAIR(btf_map, int, struct ipv_counts);
+ 
+-struct dummy_tracepoint_args {
+-	unsigned long long pad;
+-	struct sock *sock;
+-};
+-
+ __attribute__((noinline))
+-int test_long_fname_2(struct dummy_tracepoint_args *arg)
++int test_long_fname_2(void)
+ {
+ 	struct ipv_counts *counts;
+ 	int key = 0;
+ 
+-	if (!arg->sock)
+-		return 0;
+-
+ 	counts = bpf_map_lookup_elem(&btf_map, &key);
+ 	if (!counts)
+ 		return 0;
+@@ -44,15 +36,15 @@ int test_long_fname_2(struct dummy_tracepoint_args *arg)
+ }
+ 
+ __attribute__((noinline))
+-int test_long_fname_1(struct dummy_tracepoint_args *arg)
++int test_long_fname_1(void)
+ {
+-	return test_long_fname_2(arg);
++	return test_long_fname_2();
+ }
+ 
+ SEC("dummy_tracepoint")
+-int _dummy_tracepoint(struct dummy_tracepoint_args *arg)
++int _dummy_tracepoint(void *arg)
+ {
+-	return test_long_fname_1(arg);
++	return test_long_fname_1();
+ }
+ 
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/test_btf_newkv.c b/tools/testing/selftests/bpf/progs/test_btf_newkv.c
+index a924e53c8e9d..6c5560162746 100644
+--- a/tools/testing/selftests/bpf/progs/test_btf_newkv.c
++++ b/tools/testing/selftests/bpf/progs/test_btf_newkv.c
+@@ -28,20 +28,12 @@ struct {
+ 	__type(value, struct ipv_counts);
+ } btf_map SEC(".maps");
+ 
+-struct dummy_tracepoint_args {
+-	unsigned long long pad;
+-	struct sock *sock;
+-};
+-
+ __attribute__((noinline))
+-int test_long_fname_2(struct dummy_tracepoint_args *arg)
++int test_long_fname_2(void)
+ {
+ 	struct ipv_counts *counts;
+ 	int key = 0;
+ 
+-	if (!arg->sock)
+-		return 0;
+-
+ 	counts = bpf_map_lookup_elem(&btf_map, &key);
+ 	if (!counts)
+ 		return 0;
+@@ -57,15 +49,15 @@ int test_long_fname_2(struct dummy_tracepoint_args *arg)
+ }
+ 
+ __attribute__((noinline))
+-int test_long_fname_1(struct dummy_tracepoint_args *arg)
++int test_long_fname_1(void)
+ {
+-	return test_long_fname_2(arg);
++	return test_long_fname_2();
+ }
+ 
+ SEC("dummy_tracepoint")
+-int _dummy_tracepoint(struct dummy_tracepoint_args *arg)
++int _dummy_tracepoint(void *arg)
+ {
+-	return test_long_fname_1(arg);
++	return test_long_fname_1();
+ }
+ 
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/progs/test_btf_nokv.c b/tools/testing/selftests/bpf/progs/test_btf_nokv.c
+index 983aedd1c072..506da7fd2da2 100644
+--- a/tools/testing/selftests/bpf/progs/test_btf_nokv.c
++++ b/tools/testing/selftests/bpf/progs/test_btf_nokv.c
+@@ -17,20 +17,12 @@ struct bpf_map_def SEC("maps") btf_map = {
+ 	.max_entries = 4,
+ };
+ 
+-struct dummy_tracepoint_args {
+-	unsigned long long pad;
+-	struct sock *sock;
+-};
+-
+ __attribute__((noinline))
+-int test_long_fname_2(struct dummy_tracepoint_args *arg)
++int test_long_fname_2(void)
+ {
+ 	struct ipv_counts *counts;
+ 	int key = 0;
+ 
+-	if (!arg->sock)
+-		return 0;
+-
+ 	counts = bpf_map_lookup_elem(&btf_map, &key);
+ 	if (!counts)
+ 		return 0;
+@@ -41,15 +33,15 @@ int test_long_fname_2(struct dummy_tracepoint_args *arg)
+ }
+ 
+ __attribute__((noinline))
+-int test_long_fname_1(struct dummy_tracepoint_args *arg)
++int test_long_fname_1(void)
+ {
+-	return test_long_fname_2(arg);
++	return test_long_fname_2();
+ }
+ 
+ SEC("dummy_tracepoint")
+-int _dummy_tracepoint(struct dummy_tracepoint_args *arg)
++int _dummy_tracepoint(void *arg)
+ {
+-	return test_long_fname_1(arg);
++	return test_long_fname_1();
+ }
+ 
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/bpf/test_btf.c b/tools/testing/selftests/bpf/test_btf.c
+index 8da77cda5f4a..305fae8f80a9 100644
+--- a/tools/testing/selftests/bpf/test_btf.c
++++ b/tools/testing/selftests/bpf/test_btf.c
+@@ -2854,7 +2854,7 @@ static struct btf_raw_test raw_tests[] = {
+ 	.value_type_id = 1,
+ 	.max_entries = 4,
+ 	.btf_load_err = true,
+-	.err_str = "vlen != 0",
++	.err_str = "Invalid func linkage",
+ },
+ 
+ {
+diff --git a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
+index 7f6c232cd842..ed1c2cea1dea 100644
+--- a/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
++++ b/tools/testing/selftests/bpf/verifier/value_illegal_alu.c
+@@ -88,6 +88,7 @@
+ 	BPF_EXIT_INSN(),
+ 	},
+ 	.fixup_map_hash_48b = { 3 },
++	.errstr_unpriv = "leaking pointer from stack off -8",
+ 	.errstr = "R0 invalid mem access 'inv'",
+ 	.result = REJECT,
+ 	.flags = F_NEEDS_EFFICIENT_UNALIGNED_ACCESS,

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-02 19:25 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-02 19:25 UTC (permalink / raw
  To: gentoo-commits

commit:     5686f4b988cb74f62b7d571baf82575518356710
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May  2 19:25:30 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May  2 19:25:30 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=5686f4b9

Linux patch 5.6.10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |  4 ++++
 1009_linux-5.6.10.patch | 29 +++++++++++++++++++++++++++++
 2 files changed, 33 insertions(+)

diff --git a/0000_README b/0000_README
index 8794f80..25aa563 100644
--- a/0000_README
+++ b/0000_README
@@ -79,6 +79,10 @@ Patch:  1008_linux-5.6.9.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.9
 
+Patch:  1009_linux-5.6.10.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.10
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1009_linux-5.6.10.patch b/1009_linux-5.6.10.patch
new file mode 100644
index 0000000..710dd6b
--- /dev/null
+++ b/1009_linux-5.6.10.patch
@@ -0,0 +1,29 @@
+diff --git a/Makefile b/Makefile
+index 2fc8ba07d930..4b29cc9769e8 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 9
++SUBLEVEL = 10
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/sound/soc/meson/axg-card.c b/sound/soc/meson/axg-card.c
+index 2b04ac3d8fd3..1f698adde506 100644
+--- a/sound/soc/meson/axg-card.c
++++ b/sound/soc/meson/axg-card.c
+@@ -586,10 +586,8 @@ static int axg_card_add_link(struct snd_soc_card *card, struct device_node *np,
+ 
+ 	if (axg_card_cpu_is_tdm_iface(dai_link->cpus->of_node))
+ 		ret = axg_card_parse_tdm(card, np, index);
+-	else if (axg_card_cpu_is_codec(dai_link->cpus->of_node)) {
++	else if (axg_card_cpu_is_codec(dai_link->cpus->of_node))
+ 		dai_link->params = &codec_params;
+-		dai_link->no_pcm = 0; /* link is not a DPCM BE */
+-	}
+ 
+ 	return ret;
+ }

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-06 11:47 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-06 11:47 UTC (permalink / raw
  To: gentoo-commits

commit:     769023a25fa060b150715c78c10a6d44ab515704
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May  6 11:47:42 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May  6 11:47:42 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=769023a2

Linux patch 5.6.11

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1010_linux-5.6.11.patch | 2447 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 2451 insertions(+)

diff --git a/0000_README b/0000_README
index 25aa563..13f0a7d 100644
--- a/0000_README
+++ b/0000_README
@@ -83,6 +83,10 @@ Patch:  1009_linux-5.6.10.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.10
 
+Patch:  1010_linux-5.6.11.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.11
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1010_linux-5.6.11.patch b/1010_linux-5.6.11.patch
new file mode 100644
index 0000000..7ada05b
--- /dev/null
+++ b/1010_linux-5.6.11.patch
@@ -0,0 +1,2447 @@
+diff --git a/Makefile b/Makefile
+index 4b29cc9769e8..5dedd6f9ad75 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 10
++SUBLEVEL = 11
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/imx6qdl-sr-som-ti.dtsi b/arch/arm/boot/dts/imx6qdl-sr-som-ti.dtsi
+index 44a97ba93a95..352ac585ca6b 100644
+--- a/arch/arm/boot/dts/imx6qdl-sr-som-ti.dtsi
++++ b/arch/arm/boot/dts/imx6qdl-sr-som-ti.dtsi
+@@ -153,6 +153,7 @@
+ 	bus-width = <4>;
+ 	keep-power-in-suspend;
+ 	mmc-pwrseq = <&pwrseq_ti_wifi>;
++	cap-power-off-card;
+ 	non-removable;
+ 	vmmc-supply = <&vcc_3v3>;
+ 	/* vqmmc-supply = <&nvcc_sd1>; - MMC layer doesn't like it! */
+diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile
+index dd2514bb1511..3862cad2410c 100644
+--- a/arch/arm64/kernel/vdso/Makefile
++++ b/arch/arm64/kernel/vdso/Makefile
+@@ -32,7 +32,7 @@ UBSAN_SANITIZE			:= n
+ OBJECT_FILES_NON_STANDARD	:= y
+ KCOV_INSTRUMENT			:= n
+ 
+-CFLAGS_vgettimeofday.o = -O2 -mcmodel=tiny
++CFLAGS_vgettimeofday.o = -O2 -mcmodel=tiny -fasynchronous-unwind-tables
+ 
+ ifneq ($(c-gettimeofday-y),)
+   CFLAGS_vgettimeofday.o += -include $(c-gettimeofday-y)
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index 624f5d9b0f79..fd51bac11b46 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -73,7 +73,8 @@ static int hv_cpu_init(unsigned int cpu)
+ 	struct page *pg;
+ 
+ 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
+-	pg = alloc_page(GFP_KERNEL);
++	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
++	pg = alloc_page(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL);
+ 	if (unlikely(!pg))
+ 		return -ENOMEM;
+ 	*input_arg = page_address(pg);
+@@ -254,6 +255,7 @@ static int __init hv_pci_init(void)
+ static int hv_suspend(void)
+ {
+ 	union hv_x64_msr_hypercall_contents hypercall_msr;
++	int ret;
+ 
+ 	/*
+ 	 * Reset the hypercall page as it is going to be invalidated
+@@ -270,12 +272,17 @@ static int hv_suspend(void)
+ 	hypercall_msr.enable = 0;
+ 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+ 
+-	return 0;
++	ret = hv_cpu_die(0);
++	return ret;
+ }
+ 
+ static void hv_resume(void)
+ {
+ 	union hv_x64_msr_hypercall_contents hypercall_msr;
++	int ret;
++
++	ret = hv_cpu_init(0);
++	WARN_ON(ret);
+ 
+ 	/* Re-enable the hypercall page */
+ 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
+@@ -288,6 +295,7 @@ static void hv_resume(void)
+ 	hv_hypercall_pg_saved = NULL;
+ }
+ 
++/* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
+ static struct syscore_ops hv_syscore_ops = {
+ 	.suspend	= hv_suspend,
+ 	.resume		= hv_resume,
+diff --git a/block/partition-generic.c b/block/partition-generic.c
+index ebe4c2e9834b..8a7906fa96fd 100644
+--- a/block/partition-generic.c
++++ b/block/partition-generic.c
+@@ -468,7 +468,7 @@ int blk_drop_partitions(struct gendisk *disk, struct block_device *bdev)
+ 
+ 	if (!disk_part_scan_enabled(disk))
+ 		return 0;
+-	if (bdev->bd_part_count || bdev->bd_openers > 1)
++	if (bdev->bd_part_count)
+ 		return -EBUSY;
+ 	res = invalidate_partition(disk, 0);
+ 	if (res)
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index b2263ec67b43..5832bc10aca8 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -273,13 +273,13 @@ int acpi_device_set_power(struct acpi_device *device, int state)
+  end:
+ 	if (result) {
+ 		dev_warn(&device->dev, "Failed to change power state to %s\n",
+-			 acpi_power_state_string(state));
++			 acpi_power_state_string(target_state));
+ 	} else {
+ 		device->power.state = target_state;
+ 		ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ 				  "Device [%s] transitioned to %s\n",
+ 				  device->pnp.bus_id,
+-				  acpi_power_state_string(state)));
++				  acpi_power_state_string(target_state)));
+ 	}
+ 
+ 	return result;
+diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
+index ef1a65f4fc92..11ae7f1ff30d 100644
+--- a/drivers/crypto/caam/caamalg.c
++++ b/drivers/crypto/caam/caamalg.c
+@@ -1791,7 +1791,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
+ 
+ 	if (ivsize || mapped_dst_nents > 1)
+ 		sg_to_sec4_set_last(edesc->sec4_sg + dst_sg_idx +
+-				    mapped_dst_nents);
++				    mapped_dst_nents - 1 + !!ivsize);
+ 
+ 	if (sec4_sg_bytes) {
+ 		edesc->sec4_sg_dma = dma_map_single(jrdev, edesc->sec4_sg,
+diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
+index c343c7c10b4c..e7589d91de8f 100644
+--- a/drivers/dma-buf/dma-buf.c
++++ b/drivers/dma-buf/dma-buf.c
+@@ -388,7 +388,8 @@ static long dma_buf_ioctl(struct file *file,
+ 
+ 		return ret;
+ 
+-	case DMA_BUF_SET_NAME:
++	case DMA_BUF_SET_NAME_A:
++	case DMA_BUF_SET_NAME_B:
+ 		return dma_buf_set_name(dmabuf, (const char __user *)arg);
+ 
+ 	default:
+diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
+index 5142da401db3..c7e1dfe81d1e 100644
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -241,7 +241,8 @@ config FSL_RAID
+ 
+ config HISI_DMA
+ 	tristate "HiSilicon DMA Engine support"
+-	depends on ARM64 || (COMPILE_TEST && PCI_MSI)
++	depends on ARM64 || COMPILE_TEST
++	depends on PCI_MSI
+ 	select DMA_ENGINE
+ 	select DMA_VIRTUAL_CHANNELS
+ 	help
+diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
+index 17909fd1820f..b5c4926aa76e 100644
+--- a/drivers/dma/dmaengine.c
++++ b/drivers/dma/dmaengine.c
+@@ -151,10 +151,6 @@ static void chan_dev_release(struct device *dev)
+ 	struct dma_chan_dev *chan_dev;
+ 
+ 	chan_dev = container_of(dev, typeof(*chan_dev), device);
+-	if (atomic_dec_and_test(chan_dev->idr_ref)) {
+-		ida_free(&dma_ida, chan_dev->dev_id);
+-		kfree(chan_dev->idr_ref);
+-	}
+ 	kfree(chan_dev);
+ }
+ 
+@@ -952,27 +948,9 @@ static int get_dma_id(struct dma_device *device)
+ }
+ 
+ static int __dma_async_device_channel_register(struct dma_device *device,
+-					       struct dma_chan *chan,
+-					       int chan_id)
++					       struct dma_chan *chan)
+ {
+ 	int rc = 0;
+-	int chancnt = device->chancnt;
+-	atomic_t *idr_ref;
+-	struct dma_chan *tchan;
+-
+-	tchan = list_first_entry_or_null(&device->channels,
+-					 struct dma_chan, device_node);
+-	if (!tchan)
+-		return -ENODEV;
+-
+-	if (tchan->dev) {
+-		idr_ref = tchan->dev->idr_ref;
+-	} else {
+-		idr_ref = kmalloc(sizeof(*idr_ref), GFP_KERNEL);
+-		if (!idr_ref)
+-			return -ENOMEM;
+-		atomic_set(idr_ref, 0);
+-	}
+ 
+ 	chan->local = alloc_percpu(typeof(*chan->local));
+ 	if (!chan->local)
+@@ -988,29 +966,36 @@ static int __dma_async_device_channel_register(struct dma_device *device,
+ 	 * When the chan_id is a negative value, we are dynamically adding
+ 	 * the channel. Otherwise we are static enumerating.
+ 	 */
+-	chan->chan_id = chan_id < 0 ? chancnt : chan_id;
++	mutex_lock(&device->chan_mutex);
++	chan->chan_id = ida_alloc(&device->chan_ida, GFP_KERNEL);
++	mutex_unlock(&device->chan_mutex);
++	if (chan->chan_id < 0) {
++		pr_err("%s: unable to alloc ida for chan: %d\n",
++		       __func__, chan->chan_id);
++		goto err_out;
++	}
++
+ 	chan->dev->device.class = &dma_devclass;
+ 	chan->dev->device.parent = device->dev;
+ 	chan->dev->chan = chan;
+-	chan->dev->idr_ref = idr_ref;
+ 	chan->dev->dev_id = device->dev_id;
+-	atomic_inc(idr_ref);
+ 	dev_set_name(&chan->dev->device, "dma%dchan%d",
+ 		     device->dev_id, chan->chan_id);
+-
+ 	rc = device_register(&chan->dev->device);
+ 	if (rc)
+-		goto err_out;
++		goto err_out_ida;
+ 	chan->client_count = 0;
+-	device->chancnt = chan->chan_id + 1;
++	device->chancnt++;
+ 
+ 	return 0;
+ 
++ err_out_ida:
++	mutex_lock(&device->chan_mutex);
++	ida_free(&device->chan_ida, chan->chan_id);
++	mutex_unlock(&device->chan_mutex);
+  err_out:
+ 	free_percpu(chan->local);
+ 	kfree(chan->dev);
+-	if (atomic_dec_return(idr_ref) == 0)
+-		kfree(idr_ref);
+ 	return rc;
+ }
+ 
+@@ -1019,7 +1004,7 @@ int dma_async_device_channel_register(struct dma_device *device,
+ {
+ 	int rc;
+ 
+-	rc = __dma_async_device_channel_register(device, chan, -1);
++	rc = __dma_async_device_channel_register(device, chan);
+ 	if (rc < 0)
+ 		return rc;
+ 
+@@ -1039,6 +1024,9 @@ static void __dma_async_device_channel_unregister(struct dma_device *device,
+ 	device->chancnt--;
+ 	chan->dev->chan = NULL;
+ 	mutex_unlock(&dma_list_mutex);
++	mutex_lock(&device->chan_mutex);
++	ida_free(&device->chan_ida, chan->chan_id);
++	mutex_unlock(&device->chan_mutex);
+ 	device_unregister(&chan->dev->device);
+ 	free_percpu(chan->local);
+ }
+@@ -1061,7 +1049,7 @@ EXPORT_SYMBOL_GPL(dma_async_device_channel_unregister);
+  */
+ int dma_async_device_register(struct dma_device *device)
+ {
+-	int rc, i = 0;
++	int rc;
+ 	struct dma_chan* chan;
+ 
+ 	if (!device)
+@@ -1166,9 +1154,12 @@ int dma_async_device_register(struct dma_device *device)
+ 	if (rc != 0)
+ 		return rc;
+ 
++	mutex_init(&device->chan_mutex);
++	ida_init(&device->chan_ida);
++
+ 	/* represent channels in sysfs. Probably want devs too */
+ 	list_for_each_entry(chan, &device->channels, device_node) {
+-		rc = __dma_async_device_channel_register(device, chan, i++);
++		rc = __dma_async_device_channel_register(device, chan);
+ 		if (rc < 0)
+ 			goto err_out;
+ 	}
+@@ -1239,6 +1230,7 @@ void dma_async_device_unregister(struct dma_device *device)
+ 	 */
+ 	dma_cap_set(DMA_PRIVATE, device->cap_mask);
+ 	dma_channel_rebalance();
++	ida_free(&dma_ida, device->dev_id);
+ 	dma_device_put(device);
+ 	mutex_unlock(&dma_list_mutex);
+ }
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index a2cadfa2e6d7..364dd34799d4 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -240,7 +240,7 @@ static bool is_threaded_test_run(struct dmatest_info *info)
+ 		struct dmatest_thread *thread;
+ 
+ 		list_for_each_entry(thread, &dtc->threads, node) {
+-			if (!thread->done)
++			if (!thread->done && !thread->pending)
+ 				return true;
+ 		}
+ 	}
+@@ -662,8 +662,8 @@ static int dmatest_func(void *data)
+ 		flags = DMA_CTRL_ACK | DMA_PREP_INTERRUPT;
+ 
+ 	ktime = ktime_get();
+-	while (!kthread_should_stop()
+-	       && !(params->iterations && total_tests >= params->iterations)) {
++	while (!(kthread_should_stop() ||
++	       (params->iterations && total_tests >= params->iterations))) {
+ 		struct dma_async_tx_descriptor *tx = NULL;
+ 		struct dmaengine_unmap_data *um;
+ 		dma_addr_t *dsts;
+diff --git a/drivers/dma/ti/k3-psil.c b/drivers/dma/ti/k3-psil.c
+index d7b965049ccb..fb7c8150b0d1 100644
+--- a/drivers/dma/ti/k3-psil.c
++++ b/drivers/dma/ti/k3-psil.c
+@@ -27,6 +27,7 @@ struct psil_endpoint_config *psil_get_ep_config(u32 thread_id)
+ 			soc_ep_map = &j721e_ep_map;
+ 		} else {
+ 			pr_err("PSIL: No compatible machine found for map\n");
++			mutex_unlock(&ep_map_mutex);
+ 			return ERR_PTR(-ENOTSUPP);
+ 		}
+ 		pr_debug("%s: Using map for %s\n", __func__, soc_ep_map->name);
+diff --git a/drivers/gpu/drm/amd/amdgpu/navi10_sdma_pkt_open.h b/drivers/gpu/drm/amd/amdgpu/navi10_sdma_pkt_open.h
+index 074a9a09c0a7..a5b60c9a2418 100644
+--- a/drivers/gpu/drm/amd/amdgpu/navi10_sdma_pkt_open.h
++++ b/drivers/gpu/drm/amd/amdgpu/navi10_sdma_pkt_open.h
+@@ -73,6 +73,22 @@
+ #define SDMA_OP_AQL_COPY  0
+ #define SDMA_OP_AQL_BARRIER_OR  0
+ 
++#define SDMA_GCR_RANGE_IS_PA		(1 << 18)
++#define SDMA_GCR_SEQ(x)			(((x) & 0x3) << 16)
++#define SDMA_GCR_GL2_WB			(1 << 15)
++#define SDMA_GCR_GL2_INV		(1 << 14)
++#define SDMA_GCR_GL2_DISCARD		(1 << 13)
++#define SDMA_GCR_GL2_RANGE(x)		(((x) & 0x3) << 11)
++#define SDMA_GCR_GL2_US			(1 << 10)
++#define SDMA_GCR_GL1_INV		(1 << 9)
++#define SDMA_GCR_GLV_INV		(1 << 8)
++#define SDMA_GCR_GLK_INV		(1 << 7)
++#define SDMA_GCR_GLK_WB			(1 << 6)
++#define SDMA_GCR_GLM_INV		(1 << 5)
++#define SDMA_GCR_GLM_WB			(1 << 4)
++#define SDMA_GCR_GL1_RANGE(x)		(((x) & 0x3) << 2)
++#define SDMA_GCR_GLI_INV(x)		(((x) & 0x3) << 0)
++
+ /*define for op field*/
+ #define SDMA_PKT_HEADER_op_offset 0
+ #define SDMA_PKT_HEADER_op_mask   0x000000FF
+diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+index 67b9830b7c7e..ddc8b217e8c6 100644
+--- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
+@@ -382,6 +382,18 @@ static void sdma_v5_0_ring_emit_ib(struct amdgpu_ring *ring,
+ 	unsigned vmid = AMDGPU_JOB_GET_VMID(job);
+ 	uint64_t csa_mc_addr = amdgpu_sdma_get_csa_mc_addr(ring, vmid);
+ 
++	/* Invalidate L2, because if we don't do it, we might get stale cache
++	 * lines from previous IBs.
++	 */
++	amdgpu_ring_write(ring, SDMA_PKT_HEADER_OP(SDMA_OP_GCR_REQ));
++	amdgpu_ring_write(ring, 0);
++	amdgpu_ring_write(ring, (SDMA_GCR_GL2_INV |
++				 SDMA_GCR_GL2_WB |
++				 SDMA_GCR_GLM_INV |
++				 SDMA_GCR_GLM_WB) << 16);
++	amdgpu_ring_write(ring, 0xffffff80);
++	amdgpu_ring_write(ring, 0xffff);
++
+ 	/* An IB packet must end on a 8 DW boundary--the next dword
+ 	 * must be on a 8-dword boundary. Our IB packet below is 6
+ 	 * dwords long, thus add x number of NOPs, such that, in
+@@ -1597,7 +1609,7 @@ static const struct amdgpu_ring_funcs sdma_v5_0_ring_funcs = {
+ 		SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 +
+ 		SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 6 * 2 +
+ 		10 + 10 + 10, /* sdma_v5_0_ring_emit_fence x3 for user fence, vm fence */
+-	.emit_ib_size = 7 + 6, /* sdma_v5_0_ring_emit_ib */
++	.emit_ib_size = 5 + 7 + 6, /* sdma_v5_0_ring_emit_ib */
+ 	.emit_ib = sdma_v5_0_ring_emit_ib,
+ 	.emit_fence = sdma_v5_0_ring_emit_fence,
+ 	.emit_pipeline_sync = sdma_v5_0_ring_emit_pipeline_sync,
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 6240259b3a93..8136a58deb39 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3212,7 +3212,8 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
+ 			  const union dc_tiling_info *tiling_info,
+ 			  const uint64_t info,
+ 			  struct dc_plane_dcc_param *dcc,
+-			  struct dc_plane_address *address)
++			  struct dc_plane_address *address,
++			  bool force_disable_dcc)
+ {
+ 	struct dc *dc = adev->dm.dc;
+ 	struct dc_dcc_surface_param input;
+@@ -3224,6 +3225,9 @@ fill_plane_dcc_attributes(struct amdgpu_device *adev,
+ 	memset(&input, 0, sizeof(input));
+ 	memset(&output, 0, sizeof(output));
+ 
++	if (force_disable_dcc)
++		return 0;
++
+ 	if (!offset)
+ 		return 0;
+ 
+@@ -3273,7 +3277,8 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ 			     union dc_tiling_info *tiling_info,
+ 			     struct plane_size *plane_size,
+ 			     struct dc_plane_dcc_param *dcc,
+-			     struct dc_plane_address *address)
++			     struct dc_plane_address *address,
++			     bool force_disable_dcc)
+ {
+ 	const struct drm_framebuffer *fb = &afb->base;
+ 	int ret;
+@@ -3379,7 +3384,8 @@ fill_plane_buffer_attributes(struct amdgpu_device *adev,
+ 
+ 		ret = fill_plane_dcc_attributes(adev, afb, format, rotation,
+ 						plane_size, tiling_info,
+-						tiling_flags, dcc, address);
++						tiling_flags, dcc, address,
++						force_disable_dcc);
+ 		if (ret)
+ 			return ret;
+ 	}
+@@ -3471,7 +3477,8 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ 			    const struct drm_plane_state *plane_state,
+ 			    const uint64_t tiling_flags,
+ 			    struct dc_plane_info *plane_info,
+-			    struct dc_plane_address *address)
++			    struct dc_plane_address *address,
++			    bool force_disable_dcc)
+ {
+ 	const struct drm_framebuffer *fb = plane_state->fb;
+ 	const struct amdgpu_framebuffer *afb =
+@@ -3550,7 +3557,8 @@ fill_dc_plane_info_and_addr(struct amdgpu_device *adev,
+ 					   plane_info->rotation, tiling_flags,
+ 					   &plane_info->tiling_info,
+ 					   &plane_info->plane_size,
+-					   &plane_info->dcc, address);
++					   &plane_info->dcc, address,
++					   force_disable_dcc);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -3573,6 +3581,7 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ 	struct dc_plane_info plane_info;
+ 	uint64_t tiling_flags;
+ 	int ret;
++	bool force_disable_dcc = false;
+ 
+ 	ret = fill_dc_scaling_info(plane_state, &scaling_info);
+ 	if (ret)
+@@ -3587,9 +3596,11 @@ static int fill_dc_plane_attributes(struct amdgpu_device *adev,
+ 	if (ret)
+ 		return ret;
+ 
++	force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
+ 	ret = fill_dc_plane_info_and_addr(adev, plane_state, tiling_flags,
+ 					  &plane_info,
+-					  &dc_plane_state->address);
++					  &dc_plane_state->address,
++					  force_disable_dcc);
+ 	if (ret)
+ 		return ret;
+ 
+@@ -5171,6 +5182,7 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
+ 	uint64_t tiling_flags;
+ 	uint32_t domain;
+ 	int r;
++	bool force_disable_dcc = false;
+ 
+ 	dm_plane_state_old = to_dm_plane_state(plane->state);
+ 	dm_plane_state_new = to_dm_plane_state(new_state);
+@@ -5229,11 +5241,13 @@ static int dm_plane_helper_prepare_fb(struct drm_plane *plane,
+ 			dm_plane_state_old->dc_state != dm_plane_state_new->dc_state) {
+ 		struct dc_plane_state *plane_state = dm_plane_state_new->dc_state;
+ 
++		force_disable_dcc = adev->asic_type == CHIP_RAVEN && adev->in_suspend;
+ 		fill_plane_buffer_attributes(
+ 			adev, afb, plane_state->format, plane_state->rotation,
+ 			tiling_flags, &plane_state->tiling_info,
+ 			&plane_state->plane_size, &plane_state->dcc,
+-			&plane_state->address);
++			&plane_state->address,
++			force_disable_dcc);
+ 	}
+ 
+ 	return 0;
+@@ -6514,7 +6528,12 @@ static void amdgpu_dm_commit_planes(struct drm_atomic_state *state,
+ 		fill_dc_plane_info_and_addr(
+ 			dm->adev, new_plane_state, tiling_flags,
+ 			&bundle->plane_infos[planes_count],
+-			&bundle->flip_addrs[planes_count].address);
++			&bundle->flip_addrs[planes_count].address,
++			false);
++
++		DRM_DEBUG_DRIVER("plane: id=%d dcc_en=%d\n",
++				 new_plane_state->plane->index,
++				 bundle->plane_infos[planes_count].dcc.enable);
+ 
+ 		bundle->surface_updates[planes_count].plane_info =
+ 			&bundle->plane_infos[planes_count];
+@@ -7935,7 +7954,8 @@ dm_determine_update_type_for_commit(struct amdgpu_display_manager *dm,
+ 				ret = fill_dc_plane_info_and_addr(
+ 					dm->adev, new_plane_state, tiling_flags,
+ 					plane_info,
+-					&flip_addr->address);
++					&flip_addr->address,
++					false);
+ 				if (ret)
+ 					goto cleanup;
+ 
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 805fb004c8eb..079800a07d6e 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -5009,7 +5009,7 @@ static struct drm_display_mode *drm_mode_displayid_detailed(struct drm_device *d
+ 	struct drm_display_mode *mode;
+ 	unsigned pixel_clock = (timings->pixel_clock[0] |
+ 				(timings->pixel_clock[1] << 8) |
+-				(timings->pixel_clock[2] << 16));
++				(timings->pixel_clock[2] << 16)) + 1;
+ 	unsigned hactive = (timings->hactive[0] | timings->hactive[1] << 8) + 1;
+ 	unsigned hblank = (timings->hblank[0] | timings->hblank[1] << 8) + 1;
+ 	unsigned hsync = (timings->hsync[0] | (timings->hsync[1] & 0x7f) << 8) + 1;
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+index 6c7825a2dc2a..b032d66d7c13 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_tiling.c
+@@ -183,21 +183,35 @@ i915_gem_object_fence_prepare(struct drm_i915_gem_object *obj,
+ 			      int tiling_mode, unsigned int stride)
+ {
+ 	struct i915_ggtt *ggtt = &to_i915(obj->base.dev)->ggtt;
+-	struct i915_vma *vma;
++	struct i915_vma *vma, *vn;
++	LIST_HEAD(unbind);
+ 	int ret = 0;
+ 
+ 	if (tiling_mode == I915_TILING_NONE)
+ 		return 0;
+ 
+ 	mutex_lock(&ggtt->vm.mutex);
++
++	spin_lock(&obj->vma.lock);
+ 	for_each_ggtt_vma(vma, obj) {
++		GEM_BUG_ON(vma->vm != &ggtt->vm);
++
+ 		if (i915_vma_fence_prepare(vma, tiling_mode, stride))
+ 			continue;
+ 
++		list_move(&vma->vm_link, &unbind);
++	}
++	spin_unlock(&obj->vma.lock);
++
++	list_for_each_entry_safe(vma, vn, &unbind, vm_link) {
+ 		ret = __i915_vma_unbind(vma);
+-		if (ret)
++		if (ret) {
++			/* Restore the remaining vma on an error */
++			list_splice(&unbind, &ggtt->vm.bound_list);
+ 			break;
++		}
+ 	}
++
+ 	mutex_unlock(&ggtt->vm.mutex);
+ 
+ 	return ret;
+@@ -269,6 +283,7 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
+ 	}
+ 	mutex_unlock(&obj->mm.lock);
+ 
++	spin_lock(&obj->vma.lock);
+ 	for_each_ggtt_vma(vma, obj) {
+ 		vma->fence_size =
+ 			i915_gem_fence_size(i915, vma->size, tiling, stride);
+@@ -279,6 +294,7 @@ i915_gem_object_set_tiling(struct drm_i915_gem_object *obj,
+ 		if (vma->fence)
+ 			vma->fence->dirty = true;
+ 	}
++	spin_unlock(&obj->vma.lock);
+ 
+ 	obj->tiling_and_stride = tiling | stride;
+ 	i915_gem_object_unlock(obj);
+diff --git a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+index 9311250d7d6f..7a7763be6b2e 100644
+--- a/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
++++ b/drivers/gpu/drm/i915/gem/selftests/huge_pages.c
+@@ -1578,8 +1578,10 @@ static int igt_ppgtt_pin_update(void *arg)
+ 		unsigned int page_size = BIT(first);
+ 
+ 		obj = i915_gem_object_create_internal(dev_priv, page_size);
+-		if (IS_ERR(obj))
+-			return PTR_ERR(obj);
++		if (IS_ERR(obj)) {
++			err = PTR_ERR(obj);
++			goto out_vm;
++		}
+ 
+ 		vma = i915_vma_instance(obj, vm, NULL);
+ 		if (IS_ERR(vma)) {
+@@ -1632,8 +1634,10 @@ static int igt_ppgtt_pin_update(void *arg)
+ 	}
+ 
+ 	obj = i915_gem_object_create_internal(dev_priv, PAGE_SIZE);
+-	if (IS_ERR(obj))
+-		return PTR_ERR(obj);
++	if (IS_ERR(obj)) {
++		err = PTR_ERR(obj);
++		goto out_vm;
++	}
+ 
+ 	vma = i915_vma_instance(obj, vm, NULL);
+ 	if (IS_ERR(vma)) {
+diff --git a/drivers/gpu/drm/i915/gt/intel_timeline.c b/drivers/gpu/drm/i915/gt/intel_timeline.c
+index d8d9f1179c2b..eaa4d81b7436 100644
+--- a/drivers/gpu/drm/i915/gt/intel_timeline.c
++++ b/drivers/gpu/drm/i915/gt/intel_timeline.c
+@@ -519,6 +519,8 @@ int intel_timeline_read_hwsp(struct i915_request *from,
+ 
+ 	rcu_read_lock();
+ 	cl = rcu_dereference(from->hwsp_cacheline);
++	if (i915_request_completed(from)) /* confirm cacheline is valid */
++		goto unlock;
+ 	if (unlikely(!i915_active_acquire_if_busy(&cl->active)))
+ 		goto unlock; /* seqno wrapped and completed! */
+ 	if (unlikely(i915_request_completed(from)))
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index afc6aad9bf8c..c6f02b0b6c7a 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3321,7 +3321,8 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
+ {
+ 	struct intel_uncore *uncore = &dev_priv->uncore;
+ 
+-	u32 de_pipe_masked = GEN8_PIPE_CDCLK_CRC_DONE;
++	u32 de_pipe_masked = gen8_de_pipe_fault_mask(dev_priv) |
++		GEN8_PIPE_CDCLK_CRC_DONE;
+ 	u32 de_pipe_enables;
+ 	u32 de_port_masked = GEN8_AUX_CHANNEL_A;
+ 	u32 de_port_enables;
+@@ -3332,13 +3333,10 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
+ 		de_misc_masked |= GEN8_DE_MISC_GSE;
+ 
+ 	if (INTEL_GEN(dev_priv) >= 9) {
+-		de_pipe_masked |= GEN9_DE_PIPE_IRQ_FAULT_ERRORS;
+ 		de_port_masked |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C |
+ 				  GEN9_AUX_CHANNEL_D;
+ 		if (IS_GEN9_LP(dev_priv))
+ 			de_port_masked |= BXT_DE_PORT_GMBUS;
+-	} else {
+-		de_pipe_masked |= GEN8_DE_PIPE_IRQ_FAULT_ERRORS;
+ 	}
+ 
+ 	if (INTEL_GEN(dev_priv) >= 11)
+diff --git a/drivers/gpu/drm/i915/i915_vma.c b/drivers/gpu/drm/i915/i915_vma.c
+index 4ff380770b32..1aee3efb4505 100644
+--- a/drivers/gpu/drm/i915/i915_vma.c
++++ b/drivers/gpu/drm/i915/i915_vma.c
+@@ -158,16 +158,18 @@ vma_create(struct drm_i915_gem_object *obj,
+ 
+ 	GEM_BUG_ON(!IS_ALIGNED(vma->size, I915_GTT_PAGE_SIZE));
+ 
++	spin_lock(&obj->vma.lock);
++
+ 	if (i915_is_ggtt(vm)) {
+ 		if (unlikely(overflows_type(vma->size, u32)))
+-			goto err_vma;
++			goto err_unlock;
+ 
+ 		vma->fence_size = i915_gem_fence_size(vm->i915, vma->size,
+ 						      i915_gem_object_get_tiling(obj),
+ 						      i915_gem_object_get_stride(obj));
+ 		if (unlikely(vma->fence_size < vma->size || /* overflow */
+ 			     vma->fence_size > vm->total))
+-			goto err_vma;
++			goto err_unlock;
+ 
+ 		GEM_BUG_ON(!IS_ALIGNED(vma->fence_size, I915_GTT_MIN_ALIGNMENT));
+ 
+@@ -179,8 +181,6 @@ vma_create(struct drm_i915_gem_object *obj,
+ 		__set_bit(I915_VMA_GGTT_BIT, __i915_vma_flags(vma));
+ 	}
+ 
+-	spin_lock(&obj->vma.lock);
+-
+ 	rb = NULL;
+ 	p = &obj->vma.tree.rb_node;
+ 	while (*p) {
+@@ -225,6 +225,8 @@ vma_create(struct drm_i915_gem_object *obj,
+ 
+ 	return vma;
+ 
++err_unlock:
++	spin_unlock(&obj->vma.lock);
+ err_vma:
+ 	i915_vma_free(vma);
+ 	return ERR_PTR(-E2BIG);
+diff --git a/drivers/gpu/drm/qxl/qxl_cmd.c b/drivers/gpu/drm/qxl/qxl_cmd.c
+index ef09dc6bc635..d082c194cccc 100644
+--- a/drivers/gpu/drm/qxl/qxl_cmd.c
++++ b/drivers/gpu/drm/qxl/qxl_cmd.c
+@@ -480,9 +480,10 @@ int qxl_hw_surface_alloc(struct qxl_device *qdev,
+ 		return ret;
+ 
+ 	ret = qxl_release_reserve_list(release, true);
+-	if (ret)
++	if (ret) {
++		qxl_release_free(qdev, release);
+ 		return ret;
+-
++	}
+ 	cmd = (struct qxl_surface_cmd *)qxl_release_map(qdev, release);
+ 	cmd->type = QXL_SURFACE_CMD_CREATE;
+ 	cmd->flags = QXL_SURF_FLAG_KEEP_DATA;
+@@ -499,8 +500,8 @@ int qxl_hw_surface_alloc(struct qxl_device *qdev,
+ 	/* no need to add a release to the fence for this surface bo,
+ 	   since it is only released when we ask to destroy the surface
+ 	   and it would never signal otherwise */
+-	qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
+ 	qxl_release_fence_buffer_objects(release);
++	qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
+ 
+ 	surf->hw_surf_alloc = true;
+ 	spin_lock(&qdev->surf_id_idr_lock);
+@@ -542,9 +543,8 @@ int qxl_hw_surface_dealloc(struct qxl_device *qdev,
+ 	cmd->surface_id = id;
+ 	qxl_release_unmap(qdev, release, &cmd->release_info);
+ 
+-	qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
+-
+ 	qxl_release_fence_buffer_objects(release);
++	qxl_push_command_ring_release(qdev, release, QXL_CMD_SURFACE, false);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c
+index 16d73b22f3f5..92d84280096e 100644
+--- a/drivers/gpu/drm/qxl/qxl_display.c
++++ b/drivers/gpu/drm/qxl/qxl_display.c
+@@ -523,8 +523,8 @@ static int qxl_primary_apply_cursor(struct drm_plane *plane)
+ 	cmd->u.set.visible = 1;
+ 	qxl_release_unmap(qdev, release, &cmd->release_info);
+ 
+-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ 	qxl_release_fence_buffer_objects(release);
++	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ 
+ 	return ret;
+ 
+@@ -665,8 +665,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane,
+ 	cmd->u.position.y = plane->state->crtc_y + fb->hot_y;
+ 
+ 	qxl_release_unmap(qdev, release, &cmd->release_info);
+-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ 	qxl_release_fence_buffer_objects(release);
++	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ 
+ 	if (old_cursor_bo != NULL)
+ 		qxl_bo_unpin(old_cursor_bo);
+@@ -713,8 +713,8 @@ static void qxl_cursor_atomic_disable(struct drm_plane *plane,
+ 	cmd->type = QXL_CURSOR_HIDE;
+ 	qxl_release_unmap(qdev, release, &cmd->release_info);
+ 
+-	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ 	qxl_release_fence_buffer_objects(release);
++	qxl_push_cursor_ring_release(qdev, release, QXL_CMD_CURSOR, false);
+ }
+ 
+ static void qxl_update_dumb_head(struct qxl_device *qdev,
+diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c
+index 5bebf1ea1c5d..3599db096973 100644
+--- a/drivers/gpu/drm/qxl/qxl_draw.c
++++ b/drivers/gpu/drm/qxl/qxl_draw.c
+@@ -209,9 +209,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
+ 		goto out_release_backoff;
+ 
+ 	rects = drawable_set_clipping(qdev, num_clips, clips_bo);
+-	if (!rects)
++	if (!rects) {
++		ret = -EINVAL;
+ 		goto out_release_backoff;
+-
++	}
+ 	drawable = (struct qxl_drawable *)qxl_release_map(qdev, release);
+ 
+ 	drawable->clip.type = SPICE_CLIP_TYPE_RECTS;
+@@ -242,8 +243,8 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev,
+ 	}
+ 	qxl_bo_kunmap(clips_bo);
+ 
+-	qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false);
+ 	qxl_release_fence_buffer_objects(release);
++	qxl_push_command_ring_release(qdev, release, QXL_CMD_DRAW, false);
+ 
+ out_release_backoff:
+ 	if (ret)
+diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c
+index 8117a45b3610..72f3f1bbb40c 100644
+--- a/drivers/gpu/drm/qxl/qxl_ioctl.c
++++ b/drivers/gpu/drm/qxl/qxl_ioctl.c
+@@ -261,11 +261,8 @@ static int qxl_process_single_command(struct qxl_device *qdev,
+ 			apply_surf_reloc(qdev, &reloc_info[i]);
+ 	}
+ 
++	qxl_release_fence_buffer_objects(release);
+ 	ret = qxl_push_command_ring_release(qdev, release, cmd->type, true);
+-	if (ret)
+-		qxl_release_backoff_reserve_list(release);
+-	else
+-		qxl_release_fence_buffer_objects(release);
+ 
+ out_free_bos:
+ out_free_release:
+diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
+index 60c4c6a1aac6..75737ec59614 100644
+--- a/drivers/gpu/drm/scheduler/sched_main.c
++++ b/drivers/gpu/drm/scheduler/sched_main.c
+@@ -687,7 +687,7 @@ drm_sched_get_cleanup_job(struct drm_gpu_scheduler *sched)
+ 	 */
+ 	if ((sched->timeout != MAX_SCHEDULE_TIMEOUT &&
+ 	    !cancel_delayed_work(&sched->work_tdr)) ||
+-	    __kthread_should_park(sched->thread))
++	    kthread_should_park())
+ 		return NULL;
+ 
+ 	spin_lock_irqsave(&sched->job_list_lock, flags);
+diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
+index a68bce4d0ddb..e06c6b9555cf 100644
+--- a/drivers/hv/vmbus_drv.c
++++ b/drivers/hv/vmbus_drv.c
+@@ -978,6 +978,9 @@ static int vmbus_resume(struct device *child_device)
+ 
+ 	return drv->resume(dev);
+ }
++#else
++#define vmbus_suspend NULL
++#define vmbus_resume NULL
+ #endif /* CONFIG_PM_SLEEP */
+ 
+ /*
+@@ -997,11 +1000,22 @@ static void vmbus_device_release(struct device *device)
+ }
+ 
+ /*
+- * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than
+- * SET_SYSTEM_SLEEP_PM_OPS: see the comment before vmbus_bus_pm.
++ * Note: we must use the "noirq" ops: see the comment before vmbus_bus_pm.
++ *
++ * suspend_noirq/resume_noirq are set to NULL to support Suspend-to-Idle: we
++ * shouldn't suspend the vmbus devices upon Suspend-to-Idle, otherwise there
++ * is no way to wake up a Generation-2 VM.
++ *
++ * The other 4 ops are for hibernation.
+  */
++
+ static const struct dev_pm_ops vmbus_pm = {
+-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_suspend, vmbus_resume)
++	.suspend_noirq	= NULL,
++	.resume_noirq	= NULL,
++	.freeze_noirq	= vmbus_suspend,
++	.thaw_noirq	= vmbus_resume,
++	.poweroff_noirq	= vmbus_suspend,
++	.restore_noirq	= vmbus_resume,
+ };
+ 
+ /* The one and only one */
+@@ -2281,6 +2295,9 @@ static int vmbus_bus_resume(struct device *dev)
+ 
+ 	return 0;
+ }
++#else
++#define vmbus_bus_suspend NULL
++#define vmbus_bus_resume NULL
+ #endif /* CONFIG_PM_SLEEP */
+ 
+ static const struct acpi_device_id vmbus_acpi_device_ids[] = {
+@@ -2291,16 +2308,24 @@ static const struct acpi_device_id vmbus_acpi_device_ids[] = {
+ MODULE_DEVICE_TABLE(acpi, vmbus_acpi_device_ids);
+ 
+ /*
+- * Note: we must use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS rather than
+- * SET_SYSTEM_SLEEP_PM_OPS, otherwise NIC SR-IOV can not work, because the
+- * "pci_dev_pm_ops" uses the "noirq" callbacks: in the resume path, the
+- * pci "noirq" restore callback runs before "non-noirq" callbacks (see
++ * Note: we must use the "no_irq" ops, otherwise hibernation can not work with
++ * PCI device assignment, because "pci_dev_pm_ops" uses the "noirq" ops: in
++ * the resume path, the pci "noirq" restore op runs before "non-noirq" op (see
+  * resume_target_kernel() -> dpm_resume_start(), and hibernation_restore() ->
+  * dpm_resume_end()). This means vmbus_bus_resume() and the pci-hyperv's
+- * resume callback must also run via the "noirq" callbacks.
++ * resume callback must also run via the "noirq" ops.
++ *
++ * Set suspend_noirq/resume_noirq to NULL for Suspend-to-Idle: see the comment
++ * earlier in this file before vmbus_pm.
+  */
++
+ static const struct dev_pm_ops vmbus_bus_pm = {
+-	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(vmbus_bus_suspend, vmbus_bus_resume)
++	.suspend_noirq	= NULL,
++	.resume_noirq	= NULL,
++	.freeze_noirq	= vmbus_bus_suspend,
++	.thaw_noirq	= vmbus_bus_resume,
++	.poweroff_noirq	= vmbus_bus_suspend,
++	.restore_noirq	= vmbus_bus_resume
+ };
+ 
+ static struct acpi_driver vmbus_acpi_driver = {
+diff --git a/drivers/i2c/busses/i2c-amd-mp2-pci.c b/drivers/i2c/busses/i2c-amd-mp2-pci.c
+index 5e4800d72e00..cd3fd5ee5f65 100644
+--- a/drivers/i2c/busses/i2c-amd-mp2-pci.c
++++ b/drivers/i2c/busses/i2c-amd-mp2-pci.c
+@@ -349,12 +349,12 @@ static int amd_mp2_pci_probe(struct pci_dev *pci_dev,
+ 	if (!privdata)
+ 		return -ENOMEM;
+ 
++	privdata->pci_dev = pci_dev;
+ 	rc = amd_mp2_pci_init(privdata, pci_dev);
+ 	if (rc)
+ 		return rc;
+ 
+ 	mutex_init(&privdata->c2p_lock);
+-	privdata->pci_dev = pci_dev;
+ 
+ 	pm_runtime_set_autosuspend_delay(&pci_dev->dev, 1000);
+ 	pm_runtime_use_autosuspend(&pci_dev->dev);
+diff --git a/drivers/i2c/busses/i2c-aspeed.c b/drivers/i2c/busses/i2c-aspeed.c
+index a7be6f24450b..538dfc4110f8 100644
+--- a/drivers/i2c/busses/i2c-aspeed.c
++++ b/drivers/i2c/busses/i2c-aspeed.c
+@@ -603,6 +603,7 @@ static irqreturn_t aspeed_i2c_bus_irq(int irq, void *dev_id)
+ 	/* Ack all interrupts except for Rx done */
+ 	writel(irq_received & ~ASPEED_I2CD_INTR_RX_DONE,
+ 	       bus->base + ASPEED_I2C_INTR_STS_REG);
++	readl(bus->base + ASPEED_I2C_INTR_STS_REG);
+ 	irq_remaining = irq_received;
+ 
+ #if IS_ENABLED(CONFIG_I2C_SLAVE)
+@@ -645,9 +646,11 @@ static irqreturn_t aspeed_i2c_bus_irq(int irq, void *dev_id)
+ 			irq_received, irq_handled);
+ 
+ 	/* Ack Rx done */
+-	if (irq_received & ASPEED_I2CD_INTR_RX_DONE)
++	if (irq_received & ASPEED_I2CD_INTR_RX_DONE) {
+ 		writel(ASPEED_I2CD_INTR_RX_DONE,
+ 		       bus->base + ASPEED_I2C_INTR_STS_REG);
++		readl(bus->base + ASPEED_I2C_INTR_STS_REG);
++	}
+ 	spin_unlock(&bus->lock);
+ 	return irq_remaining ? IRQ_NONE : IRQ_HANDLED;
+ }
+diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
+index 30efb7913b2e..b58224b7ba79 100644
+--- a/drivers/i2c/busses/i2c-bcm-iproc.c
++++ b/drivers/i2c/busses/i2c-bcm-iproc.c
+@@ -360,6 +360,9 @@ static bool bcm_iproc_i2c_slave_isr(struct bcm_iproc_i2c_dev *iproc_i2c,
+ 			value = (u8)((val >> S_RX_DATA_SHIFT) & S_RX_DATA_MASK);
+ 			i2c_slave_event(iproc_i2c->slave,
+ 					I2C_SLAVE_WRITE_RECEIVED, &value);
++			if (rx_status == I2C_SLAVE_RX_END)
++				i2c_slave_event(iproc_i2c->slave,
++						I2C_SLAVE_STOP, &value);
+ 		}
+ 	} else if (status & BIT(IS_S_TX_UNDERRUN_SHIFT)) {
+ 		/* Master read other than start */
+diff --git a/drivers/infiniband/core/cm.c b/drivers/infiniband/core/cm.c
+index 15e99a888427..a133f9e2735e 100644
+--- a/drivers/infiniband/core/cm.c
++++ b/drivers/infiniband/core/cm.c
+@@ -572,18 +572,6 @@ static int cm_init_av_by_path(struct sa_path_rec *path,
+ 	return 0;
+ }
+ 
+-static int cm_alloc_id(struct cm_id_private *cm_id_priv)
+-{
+-	int err;
+-	u32 id;
+-
+-	err = xa_alloc_cyclic_irq(&cm.local_id_table, &id, cm_id_priv,
+-			xa_limit_32b, &cm.local_id_next, GFP_KERNEL);
+-
+-	cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand;
+-	return err;
+-}
+-
+ static u32 cm_local_id(__be32 local_id)
+ {
+ 	return (__force u32) (local_id ^ cm.random_id_operand);
+@@ -825,6 +813,7 @@ struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
+ 				 void *context)
+ {
+ 	struct cm_id_private *cm_id_priv;
++	u32 id;
+ 	int ret;
+ 
+ 	cm_id_priv = kzalloc(sizeof *cm_id_priv, GFP_KERNEL);
+@@ -836,9 +825,6 @@ struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
+ 	cm_id_priv->id.cm_handler = cm_handler;
+ 	cm_id_priv->id.context = context;
+ 	cm_id_priv->id.remote_cm_qpn = 1;
+-	ret = cm_alloc_id(cm_id_priv);
+-	if (ret)
+-		goto error;
+ 
+ 	spin_lock_init(&cm_id_priv->lock);
+ 	init_completion(&cm_id_priv->comp);
+@@ -847,11 +833,20 @@ struct ib_cm_id *ib_create_cm_id(struct ib_device *device,
+ 	INIT_LIST_HEAD(&cm_id_priv->altr_list);
+ 	atomic_set(&cm_id_priv->work_count, -1);
+ 	refcount_set(&cm_id_priv->refcount, 1);
++
++	ret = xa_alloc_cyclic_irq(&cm.local_id_table, &id, NULL, xa_limit_32b,
++				  &cm.local_id_next, GFP_KERNEL);
++	if (ret < 0)
++		goto error;
++	cm_id_priv->id.local_id = (__force __be32)id ^ cm.random_id_operand;
++	xa_store_irq(&cm.local_id_table, cm_local_id(cm_id_priv->id.local_id),
++		     cm_id_priv, GFP_KERNEL);
++
+ 	return &cm_id_priv->id;
+ 
+ error:
+ 	kfree(cm_id_priv);
+-	return ERR_PTR(-ENOMEM);
++	return ERR_PTR(ret);
+ }
+ EXPORT_SYMBOL(ib_create_cm_id);
+ 
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 5128cb16bb48..177333d8bcda 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -360,7 +360,7 @@ lookup_get_fd_uobject(const struct uverbs_api_object *obj,
+ 	 * uverbs_uobject_fd_release(), and the caller is expected to ensure
+ 	 * that release is never done while a call to lookup is possible.
+ 	 */
+-	if (f->f_op != fd_type->fops) {
++	if (f->f_op != fd_type->fops || uobject->ufile != ufile) {
+ 		fput(f);
+ 		return ERR_PTR(-EBADF);
+ 	}
+@@ -474,16 +474,15 @@ alloc_begin_fd_uobject(const struct uverbs_api_object *obj,
+ 	filp = anon_inode_getfile(fd_type->name, fd_type->fops, NULL,
+ 				  fd_type->flags);
+ 	if (IS_ERR(filp)) {
++		uverbs_uobject_put(uobj);
+ 		uobj = ERR_CAST(filp);
+-		goto err_uobj;
++		goto err_fd;
+ 	}
+ 	uobj->object = filp;
+ 
+ 	uobj->id = new_fd;
+ 	return uobj;
+ 
+-err_uobj:
+-	uverbs_uobject_put(uobj);
+ err_fd:
+ 	put_unused_fd(new_fd);
+ 	return uobj;
+@@ -679,7 +678,6 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj,
+ 			     enum rdma_lookup_mode mode)
+ {
+ 	assert_uverbs_usecnt(uobj, mode);
+-	uobj->uapi_object->type_class->lookup_put(uobj, mode);
+ 	/*
+ 	 * In order to unlock an object, either decrease its usecnt for
+ 	 * read access or zero it in case of exclusive access. See
+@@ -696,6 +694,7 @@ void rdma_lookup_put_uobject(struct ib_uobject *uobj,
+ 		break;
+ 	}
+ 
++	uobj->uapi_object->type_class->lookup_put(uobj, mode);
+ 	/* Pairs with the kref obtained by type->lookup_get */
+ 	uverbs_uobject_put(uobj);
+ }
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 2d4083bf4a04..17fc25db0311 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -820,6 +820,10 @@ void uverbs_user_mmap_disassociate(struct ib_uverbs_file *ufile)
+ 			ret = mmget_not_zero(mm);
+ 			if (!ret) {
+ 				list_del_init(&priv->list);
++				if (priv->entry) {
++					rdma_user_mmap_entry_put(priv->entry);
++					priv->entry = NULL;
++				}
+ 				mm = NULL;
+ 				continue;
+ 			}
+diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
+index 2f5d9b181848..e5758eb0b7d2 100644
+--- a/drivers/infiniband/hw/mlx4/main.c
++++ b/drivers/infiniband/hw/mlx4/main.c
+@@ -1502,8 +1502,9 @@ static int __mlx4_ib_create_default_rules(
+ 	int i;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(pdefault_rules->rules_create_list); i++) {
++		union ib_flow_spec ib_spec = {};
+ 		int ret;
+-		union ib_flow_spec ib_spec;
++
+ 		switch (pdefault_rules->rules_create_list[i]) {
+ 		case 0:
+ 			/* no rule */
+diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
+index 8fe149e808af..245fef36ab4c 100644
+--- a/drivers/infiniband/hw/mlx5/qp.c
++++ b/drivers/infiniband/hw/mlx5/qp.c
+@@ -5545,7 +5545,9 @@ static void to_rdma_ah_attr(struct mlx5_ib_dev *ibdev,
+ 	rdma_ah_set_path_bits(ah_attr, path->grh_mlid & 0x7f);
+ 	rdma_ah_set_static_rate(ah_attr,
+ 				path->static_rate ? path->static_rate - 5 : 0);
+-	if (path->grh_mlid & (1 << 7)) {
++
++	if (path->grh_mlid & (1 << 7) ||
++	    ah_attr->type == RDMA_AH_ATTR_TYPE_ROCE) {
+ 		u32 tc_fl = be32_to_cpu(path->tclass_flowlabel);
+ 
+ 		rdma_ah_set_grh(ah_attr, NULL,
+diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
+index 5724cbbe38b1..04d2e72017fe 100644
+--- a/drivers/infiniband/sw/rdmavt/cq.c
++++ b/drivers/infiniband/sw/rdmavt/cq.c
+@@ -248,8 +248,8 @@ int rvt_create_cq(struct ib_cq *ibcq, const struct ib_cq_init_attr *attr,
+ 	 */
+ 	if (udata && udata->outlen >= sizeof(__u64)) {
+ 		cq->ip = rvt_create_mmap_info(rdi, sz, udata, u_wc);
+-		if (!cq->ip) {
+-			err = -ENOMEM;
++		if (IS_ERR(cq->ip)) {
++			err = PTR_ERR(cq->ip);
+ 			goto bail_wc;
+ 		}
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/mmap.c b/drivers/infiniband/sw/rdmavt/mmap.c
+index 652f4a7efc1b..37853aa3bcf7 100644
+--- a/drivers/infiniband/sw/rdmavt/mmap.c
++++ b/drivers/infiniband/sw/rdmavt/mmap.c
+@@ -154,7 +154,7 @@ done:
+  * @udata: user data (must be valid!)
+  * @obj: opaque pointer to a cq, wq etc
+  *
+- * Return: rvt_mmap struct on success
++ * Return: rvt_mmap struct on success, ERR_PTR on failure
+  */
+ struct rvt_mmap_info *rvt_create_mmap_info(struct rvt_dev_info *rdi, u32 size,
+ 					   struct ib_udata *udata, void *obj)
+@@ -166,7 +166,7 @@ struct rvt_mmap_info *rvt_create_mmap_info(struct rvt_dev_info *rdi, u32 size,
+ 
+ 	ip = kmalloc_node(sizeof(*ip), GFP_KERNEL, rdi->dparms.node);
+ 	if (!ip)
+-		return ip;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	size = PAGE_ALIGN(size);
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
+index 7858d499db03..2c702e1b9a2c 100644
+--- a/drivers/infiniband/sw/rdmavt/qp.c
++++ b/drivers/infiniband/sw/rdmavt/qp.c
+@@ -1244,8 +1244,8 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
+ 
+ 			qp->ip = rvt_create_mmap_info(rdi, s, udata,
+ 						      qp->r_rq.wq);
+-			if (!qp->ip) {
+-				ret = ERR_PTR(-ENOMEM);
++			if (IS_ERR(qp->ip)) {
++				ret = ERR_CAST(qp->ip);
+ 				goto bail_qpn;
+ 			}
+ 
+diff --git a/drivers/infiniband/sw/rdmavt/srq.c b/drivers/infiniband/sw/rdmavt/srq.c
+index 24fef021d51d..f547c115af03 100644
+--- a/drivers/infiniband/sw/rdmavt/srq.c
++++ b/drivers/infiniband/sw/rdmavt/srq.c
+@@ -111,8 +111,8 @@ int rvt_create_srq(struct ib_srq *ibsrq, struct ib_srq_init_attr *srq_init_attr,
+ 		u32 s = sizeof(struct rvt_rwq) + srq->rq.size * sz;
+ 
+ 		srq->ip = rvt_create_mmap_info(dev, s, udata, srq->rq.wq);
+-		if (!srq->ip) {
+-			ret = -ENOMEM;
++		if (IS_ERR(srq->ip)) {
++			ret = PTR_ERR(srq->ip);
+ 			goto bail_wq;
+ 		}
+ 
+diff --git a/drivers/infiniband/sw/siw/siw_qp_tx.c b/drivers/infiniband/sw/siw/siw_qp_tx.c
+index ae92c8080967..9f53aa4feb87 100644
+--- a/drivers/infiniband/sw/siw/siw_qp_tx.c
++++ b/drivers/infiniband/sw/siw/siw_qp_tx.c
+@@ -920,20 +920,27 @@ static int siw_fastreg_mr(struct ib_pd *pd, struct siw_sqe *sqe)
+ {
+ 	struct ib_mr *base_mr = (struct ib_mr *)(uintptr_t)sqe->base_mr;
+ 	struct siw_device *sdev = to_siw_dev(pd->device);
+-	struct siw_mem *mem = siw_mem_id2obj(sdev, sqe->rkey  >> 8);
++	struct siw_mem *mem;
+ 	int rv = 0;
+ 
+ 	siw_dbg_pd(pd, "STag 0x%08x\n", sqe->rkey);
+ 
+-	if (unlikely(!mem || !base_mr)) {
++	if (unlikely(!base_mr)) {
+ 		pr_warn("siw: fastreg: STag 0x%08x unknown\n", sqe->rkey);
+ 		return -EINVAL;
+ 	}
++
+ 	if (unlikely(base_mr->rkey >> 8 != sqe->rkey  >> 8)) {
+ 		pr_warn("siw: fastreg: STag 0x%08x: bad MR\n", sqe->rkey);
+-		rv = -EINVAL;
+-		goto out;
++		return -EINVAL;
+ 	}
++
++	mem = siw_mem_id2obj(sdev, sqe->rkey  >> 8);
++	if (unlikely(!mem)) {
++		pr_warn("siw: fastreg: STag 0x%08x unknown\n", sqe->rkey);
++		return -EINVAL;
++	}
++
+ 	if (unlikely(mem->pd != pd)) {
+ 		pr_warn("siw: fastreg: PD mismatch\n");
+ 		rv = -EINVAL;
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 6be3853a5d97..2b9a67ecc6ac 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -2936,7 +2936,7 @@ static int __init parse_amd_iommu_intr(char *str)
+ {
+ 	for (; *str; ++str) {
+ 		if (strncmp(str, "legacy", 6) == 0) {
+-			amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY;
++			amd_iommu_guest_ir = AMD_IOMMU_GUEST_IR_LEGACY_GA;
+ 			break;
+ 		}
+ 		if (strncmp(str, "vapic", 5) == 0) {
+diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
+index ef0a5246700e..0182cff2c7ac 100644
+--- a/drivers/iommu/intel-iommu.c
++++ b/drivers/iommu/intel-iommu.c
+@@ -371,11 +371,11 @@ int dmar_disabled = 0;
+ int dmar_disabled = 1;
+ #endif /* CONFIG_INTEL_IOMMU_DEFAULT_ON */
+ 
+-#ifdef INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
++#ifdef CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON
+ int intel_iommu_sm = 1;
+ #else
+ int intel_iommu_sm;
+-#endif /* INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON */
++#endif /* CONFIG_INTEL_IOMMU_SCALABLE_MODE_DEFAULT_ON */
+ 
+ int intel_iommu_enabled = 0;
+ EXPORT_SYMBOL_GPL(intel_iommu_enabled);
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 3e3528436e0b..8d2477941fd9 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -1428,7 +1428,7 @@ struct iommu_group *iommu_group_get_for_dev(struct device *dev)
+ 
+ 	return group;
+ }
+-EXPORT_SYMBOL(iommu_group_get_for_dev);
++EXPORT_SYMBOL_GPL(iommu_group_get_for_dev);
+ 
+ struct iommu_domain *iommu_group_default_domain(struct iommu_group *group)
+ {
+diff --git a/drivers/iommu/qcom_iommu.c b/drivers/iommu/qcom_iommu.c
+index 4328da0b0a9f..b160cf140e16 100644
+--- a/drivers/iommu/qcom_iommu.c
++++ b/drivers/iommu/qcom_iommu.c
+@@ -813,8 +813,11 @@ static int qcom_iommu_device_probe(struct platform_device *pdev)
+ 	qcom_iommu->dev = dev;
+ 
+ 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+-	if (res)
++	if (res) {
+ 		qcom_iommu->local_base = devm_ioremap_resource(dev, res);
++		if (IS_ERR(qcom_iommu->local_base))
++			return PTR_ERR(qcom_iommu->local_base);
++	}
+ 
+ 	qcom_iommu->iface_clk = devm_clk_get(dev, "iface");
+ 	if (IS_ERR(qcom_iommu->iface_clk)) {
+diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
+index 58fd137b6ae1..3e500098132f 100644
+--- a/drivers/md/dm-mpath.c
++++ b/drivers/md/dm-mpath.c
+@@ -585,10 +585,12 @@ static struct pgpath *__map_bio(struct multipath *m, struct bio *bio)
+ 
+ 	/* Do we need to select a new pgpath? */
+ 	pgpath = READ_ONCE(m->current_pgpath);
+-	queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);
+-	if (!pgpath || !queue_io)
++	if (!pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))
+ 		pgpath = choose_pgpath(m, bio->bi_iter.bi_size);
+ 
++	/* MPATHF_QUEUE_IO might have been cleared by choose_pgpath. */
++	queue_io = test_bit(MPATHF_QUEUE_IO, &m->flags);
++
+ 	if ((pgpath && queue_io) ||
+ 	    (!pgpath && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))) {
+ 		/* Queue for the daemon to resubmit */
+diff --git a/drivers/md/dm-verity-fec.c b/drivers/md/dm-verity-fec.c
+index 49147e634046..fb41b4f23c48 100644
+--- a/drivers/md/dm-verity-fec.c
++++ b/drivers/md/dm-verity-fec.c
+@@ -435,7 +435,7 @@ int verity_fec_decode(struct dm_verity *v, struct dm_verity_io *io,
+ 	fio->level++;
+ 
+ 	if (type == DM_VERITY_BLOCK_TYPE_METADATA)
+-		block += v->data_blocks;
++		block = block - v->hash_start + v->data_blocks;
+ 
+ 	/*
+ 	 * For RS(M, N), the continuous FEC data is divided into blocks of N
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index d3b17a654917..a3b3c6b2e61b 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -882,6 +882,24 @@ static int writecache_alloc_entries(struct dm_writecache *wc)
+ 	return 0;
+ }
+ 
++static int writecache_read_metadata(struct dm_writecache *wc, sector_t n_sectors)
++{
++	struct dm_io_region region;
++	struct dm_io_request req;
++
++	region.bdev = wc->ssd_dev->bdev;
++	region.sector = wc->start_sector;
++	region.count = n_sectors;
++	req.bi_op = REQ_OP_READ;
++	req.bi_op_flags = REQ_SYNC;
++	req.mem.type = DM_IO_VMA;
++	req.mem.ptr.vma = (char *)wc->memory_map;
++	req.client = wc->dm_io;
++	req.notify.fn = NULL;
++
++	return dm_io(&req, 1, &region, NULL);
++}
++
+ static void writecache_resume(struct dm_target *ti)
+ {
+ 	struct dm_writecache *wc = ti->private;
+@@ -892,8 +910,18 @@ static void writecache_resume(struct dm_target *ti)
+ 
+ 	wc_lock(wc);
+ 
+-	if (WC_MODE_PMEM(wc))
++	if (WC_MODE_PMEM(wc)) {
+ 		persistent_memory_invalidate_cache(wc->memory_map, wc->memory_map_size);
++	} else {
++		r = writecache_read_metadata(wc, wc->metadata_sectors);
++		if (r) {
++			size_t sb_entries_offset;
++			writecache_error(wc, r, "unable to read metadata: %d", r);
++			sb_entries_offset = offsetof(struct wc_memory_superblock, entries);
++			memset((char *)wc->memory_map + sb_entries_offset, -1,
++			       (wc->metadata_sectors << SECTOR_SHIFT) - sb_entries_offset);
++		}
++	}
+ 
+ 	wc->tree = RB_ROOT;
+ 	INIT_LIST_HEAD(&wc->lru);
+@@ -2005,6 +2033,12 @@ static int writecache_ctr(struct dm_target *ti, unsigned argc, char **argv)
+ 		ti->error = "Invalid block size";
+ 		goto bad;
+ 	}
++	if (wc->block_size < bdev_logical_block_size(wc->dev->bdev) ||
++	    wc->block_size < bdev_logical_block_size(wc->ssd_dev->bdev)) {
++		r = -EINVAL;
++		ti->error = "Block size is smaller than device logical block size";
++		goto bad;
++	}
+ 	wc->block_size_bits = __ffs(wc->block_size);
+ 
+ 	wc->max_writeback_jobs = MAX_WRITEBACK_JOBS;
+@@ -2093,8 +2127,6 @@ invalid_optional:
+ 			goto bad;
+ 		}
+ 	} else {
+-		struct dm_io_region region;
+-		struct dm_io_request req;
+ 		size_t n_blocks, n_metadata_blocks;
+ 		uint64_t n_bitmap_bits;
+ 
+@@ -2151,19 +2183,9 @@ invalid_optional:
+ 			goto bad;
+ 		}
+ 
+-		region.bdev = wc->ssd_dev->bdev;
+-		region.sector = wc->start_sector;
+-		region.count = wc->metadata_sectors;
+-		req.bi_op = REQ_OP_READ;
+-		req.bi_op_flags = REQ_SYNC;
+-		req.mem.type = DM_IO_VMA;
+-		req.mem.ptr.vma = (char *)wc->memory_map;
+-		req.client = wc->dm_io;
+-		req.notify.fn = NULL;
+-
+-		r = dm_io(&req, 1, &region, NULL);
++		r = writecache_read_metadata(wc, wc->block_size >> SECTOR_SHIFT);
+ 		if (r) {
+-			ti->error = "Unable to read metadata";
++			ti->error = "Unable to read first block of metadata";
+ 			goto bad;
+ 		}
+ 	}
+diff --git a/drivers/mmc/host/cqhci.c b/drivers/mmc/host/cqhci.c
+index 5047f7343ffc..c19f4c3f115a 100644
+--- a/drivers/mmc/host/cqhci.c
++++ b/drivers/mmc/host/cqhci.c
+@@ -5,6 +5,7 @@
+ #include <linux/delay.h>
+ #include <linux/highmem.h>
+ #include <linux/io.h>
++#include <linux/iopoll.h>
+ #include <linux/module.h>
+ #include <linux/dma-mapping.h>
+ #include <linux/slab.h>
+@@ -343,12 +344,16 @@ static int cqhci_enable(struct mmc_host *mmc, struct mmc_card *card)
+ /* CQHCI is idle and should halt immediately, so set a small timeout */
+ #define CQHCI_OFF_TIMEOUT 100
+ 
++static u32 cqhci_read_ctl(struct cqhci_host *cq_host)
++{
++	return cqhci_readl(cq_host, CQHCI_CTL);
++}
++
+ static void cqhci_off(struct mmc_host *mmc)
+ {
+ 	struct cqhci_host *cq_host = mmc->cqe_private;
+-	ktime_t timeout;
+-	bool timed_out;
+ 	u32 reg;
++	int err;
+ 
+ 	if (!cq_host->enabled || !mmc->cqe_on || cq_host->recovery_halt)
+ 		return;
+@@ -358,15 +363,9 @@ static void cqhci_off(struct mmc_host *mmc)
+ 
+ 	cqhci_writel(cq_host, CQHCI_HALT, CQHCI_CTL);
+ 
+-	timeout = ktime_add_us(ktime_get(), CQHCI_OFF_TIMEOUT);
+-	while (1) {
+-		timed_out = ktime_compare(ktime_get(), timeout) > 0;
+-		reg = cqhci_readl(cq_host, CQHCI_CTL);
+-		if ((reg & CQHCI_HALT) || timed_out)
+-			break;
+-	}
+-
+-	if (timed_out)
++	err = readx_poll_timeout(cqhci_read_ctl, cq_host, reg,
++				 reg & CQHCI_HALT, 0, CQHCI_OFF_TIMEOUT);
++	if (err < 0)
+ 		pr_err("%s: cqhci: CQE stuck on\n", mmc_hostname(mmc));
+ 	else
+ 		pr_debug("%s: cqhci: CQE off\n", mmc_hostname(mmc));
+diff --git a/drivers/mmc/host/meson-mx-sdio.c b/drivers/mmc/host/meson-mx-sdio.c
+index 8b038e7b2cd3..2e58743d83bb 100644
+--- a/drivers/mmc/host/meson-mx-sdio.c
++++ b/drivers/mmc/host/meson-mx-sdio.c
+@@ -357,14 +357,6 @@ static void meson_mx_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
+ 		meson_mx_mmc_start_cmd(mmc, mrq->cmd);
+ }
+ 
+-static int meson_mx_mmc_card_busy(struct mmc_host *mmc)
+-{
+-	struct meson_mx_mmc_host *host = mmc_priv(mmc);
+-	u32 irqc = readl(host->base + MESON_MX_SDIO_IRQC);
+-
+-	return !!(irqc & MESON_MX_SDIO_IRQC_FORCE_DATA_DAT_MASK);
+-}
+-
+ static void meson_mx_mmc_read_response(struct mmc_host *mmc,
+ 				       struct mmc_command *cmd)
+ {
+@@ -506,7 +498,6 @@ static void meson_mx_mmc_timeout(struct timer_list *t)
+ static struct mmc_host_ops meson_mx_mmc_ops = {
+ 	.request		= meson_mx_mmc_request,
+ 	.set_ios		= meson_mx_mmc_set_ios,
+-	.card_busy		= meson_mx_mmc_card_busy,
+ 	.get_cd			= mmc_gpio_get_cd,
+ 	.get_ro			= mmc_gpio_get_ro,
+ };
+@@ -570,7 +561,7 @@ static int meson_mx_mmc_add_host(struct meson_mx_mmc_host *host)
+ 	mmc->f_max = clk_round_rate(host->cfg_div_clk,
+ 				    clk_get_rate(host->parent_clk));
+ 
+-	mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23;
++	mmc->caps |= MMC_CAP_ERASE | MMC_CAP_CMD23 | MMC_CAP_WAIT_WHILE_BUSY;
+ 	mmc->ops = &meson_mx_mmc_ops;
+ 
+ 	ret = mmc_of_parse(mmc);
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index 3955fa5db43c..b68dcd1b0d50 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -2068,6 +2068,8 @@ static int sdhci_msm_probe(struct platform_device *pdev)
+ 		goto clk_disable;
+ 	}
+ 
++	msm_host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY | MMC_CAP_NEED_RSP_BUSY;
++
+ 	pm_runtime_get_noresume(&pdev->dev);
+ 	pm_runtime_set_active(&pdev->dev);
+ 	pm_runtime_enable(&pdev->dev);
+diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
+index 525de2454a4d..2527244c2ae1 100644
+--- a/drivers/mmc/host/sdhci-pci-core.c
++++ b/drivers/mmc/host/sdhci-pci-core.c
+@@ -601,6 +601,9 @@ static int intel_select_drive_strength(struct mmc_card *card,
+ 	struct sdhci_pci_slot *slot = sdhci_priv(host);
+ 	struct intel_host *intel_host = sdhci_pci_priv(slot);
+ 
++	if (!(mmc_driver_type_mask(intel_host->drv_strength) & card_drv))
++		return 0;
++
+ 	return intel_host->drv_strength;
+ }
+ 
+diff --git a/drivers/mmc/host/sdhci-xenon.c b/drivers/mmc/host/sdhci-xenon.c
+index 1dea1ba66f7b..4703cd540c7f 100644
+--- a/drivers/mmc/host/sdhci-xenon.c
++++ b/drivers/mmc/host/sdhci-xenon.c
+@@ -235,6 +235,16 @@ static void xenon_voltage_switch(struct sdhci_host *host)
+ {
+ 	/* Wait for 5ms after set 1.8V signal enable bit */
+ 	usleep_range(5000, 5500);
++
++	/*
++	 * For some reason the controller's Host Control2 register reports
++	 * the bit representing 1.8V signaling as 0 when read after it was
++	 * written as 1. A subsequent read reports 1.
++	 *
++	 * Since this may cause some issues, do an empty read of the Host
++	 * Control2 register here to circumvent this.
++	 */
++	sdhci_readw(host, SDHCI_HOST_CONTROL2);
+ }
+ 
+ static const struct sdhci_ops sdhci_xenon_ops = {
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 652ca87dac94..fb4c35a43065 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -3580,6 +3580,8 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+ 
+ 	return 0;
+  out_put_disk:
++	/* prevent double queue cleanup */
++	ns->disk->queue = NULL;
+ 	put_disk(ns->disk);
+  out_unlink_ns:
+ 	mutex_lock(&ctrl->subsys->lock);
+diff --git a/drivers/scsi/qla2xxx/qla_os.c b/drivers/scsi/qla2xxx/qla_os.c
+index 7a94e1171c72..98908c2a096a 100644
+--- a/drivers/scsi/qla2xxx/qla_os.c
++++ b/drivers/scsi/qla2xxx/qla_os.c
+@@ -3720,6 +3720,13 @@ qla2x00_remove_one(struct pci_dev *pdev)
+ 	}
+ 	qla2x00_wait_for_hba_ready(base_vha);
+ 
++	/*
++	 * If the UNLOADING flag is already set, then continue the unload
++	 * where it was set first.
++	 */
++	if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags))
++		return;
++
+ 	if (IS_QLA25XX(ha) || IS_QLA2031(ha) || IS_QLA27XX(ha) ||
+ 	    IS_QLA28XX(ha)) {
+ 		if (ha->flags.fw_started)
+@@ -3738,15 +3745,6 @@ qla2x00_remove_one(struct pci_dev *pdev)
+ 
+ 	qla2x00_wait_for_sess_deletion(base_vha);
+ 
+-	/*
+-	 * if UNLOAD flag is already set, then continue unload,
+-	 * where it was set first.
+-	 */
+-	if (test_bit(UNLOADING, &base_vha->dpc_flags))
+-		return;
+-
+-	set_bit(UNLOADING, &base_vha->dpc_flags);
+-
+ 	qla_nvme_delete(base_vha);
+ 
+ 	dma_free_coherent(&ha->pdev->dev,
+@@ -4856,6 +4854,9 @@ qla2x00_alloc_work(struct scsi_qla_host *vha, enum qla_work_type type)
+ 	struct qla_work_evt *e;
+ 	uint8_t bail;
+ 
++	if (test_bit(UNLOADING, &vha->dpc_flags))
++		return NULL;
++
+ 	QLA_VHA_MARK_BUSY(vha, bail);
+ 	if (bail)
+ 		return NULL;
+@@ -6044,13 +6045,6 @@ qla2x00_disable_board_on_pci_error(struct work_struct *work)
+ 	struct pci_dev *pdev = ha->pdev;
+ 	scsi_qla_host_t *base_vha = pci_get_drvdata(ha->pdev);
+ 
+-	/*
+-	 * if UNLOAD flag is already set, then continue unload,
+-	 * where it was set first.
+-	 */
+-	if (test_bit(UNLOADING, &base_vha->dpc_flags))
+-		return;
+-
+ 	ql_log(ql_log_warn, base_vha, 0x015b,
+ 	    "Disabling adapter.\n");
+ 
+@@ -6061,9 +6055,14 @@ qla2x00_disable_board_on_pci_error(struct work_struct *work)
+ 		return;
+ 	}
+ 
+-	qla2x00_wait_for_sess_deletion(base_vha);
++	/*
++	 * If the UNLOADING flag is already set, then continue the unload
++	 * where it was set first.
++	 */
++	if (test_and_set_bit(UNLOADING, &base_vha->dpc_flags))
++		return;
+ 
+-	set_bit(UNLOADING, &base_vha->dpc_flags);
++	qla2x00_wait_for_sess_deletion(base_vha);
+ 
+ 	qla2x00_delete_all_vps(ha, base_vha);
+ 
+diff --git a/drivers/target/target_core_iblock.c b/drivers/target/target_core_iblock.c
+index 51ffd5c002de..1c181d31f4c8 100644
+--- a/drivers/target/target_core_iblock.c
++++ b/drivers/target/target_core_iblock.c
+@@ -432,7 +432,7 @@ iblock_execute_zero_out(struct block_device *bdev, struct se_cmd *cmd)
+ 				target_to_linux_sector(dev, cmd->t_task_lba),
+ 				target_to_linux_sector(dev,
+ 					sbc_get_write_same_sectors(cmd)),
+-				GFP_KERNEL, false);
++				GFP_KERNEL, BLKDEV_ZERO_NOUNMAP);
+ 	if (ret)
+ 		return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
+ 
+diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
+index a177bf2c6683..4315facf0243 100644
+--- a/drivers/vfio/vfio_iommu_type1.c
++++ b/drivers/vfio/vfio_iommu_type1.c
+@@ -341,8 +341,8 @@ static int vaddr_get_pfn(struct mm_struct *mm, unsigned long vaddr,
+ 	vma = find_vma_intersection(mm, vaddr, vaddr + 1);
+ 
+ 	if (vma && vma->vm_flags & VM_PFNMAP) {
+-		*pfn = ((vaddr - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
+-		if (is_invalid_reserved_pfn(*pfn))
++		if (!follow_pfn(vma, vaddr, pfn) &&
++		    is_invalid_reserved_pfn(*pfn))
+ 			ret = 0;
+ 	}
+ done:
+@@ -554,7 +554,7 @@ static int vfio_iommu_type1_pin_pages(void *iommu_data,
+ 			continue;
+ 		}
+ 
+-		remote_vaddr = dma->vaddr + iova - dma->iova;
++		remote_vaddr = dma->vaddr + (iova - dma->iova);
+ 		ret = vfio_pin_page_external(dma, remote_vaddr, &phys_pfn[i],
+ 					     do_accounting);
+ 		if (ret)
+diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c
+index c9a3bbc8c6af..f689fa74c33a 100644
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -916,7 +916,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 	path = btrfs_alloc_path();
+ 	if (!path) {
+ 		ret = -ENOMEM;
+-		goto out;
++		goto out_put_group;
+ 	}
+ 
+ 	/*
+@@ -954,7 +954,7 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 		ret = btrfs_orphan_add(trans, BTRFS_I(inode));
+ 		if (ret) {
+ 			btrfs_add_delayed_iput(inode);
+-			goto out;
++			goto out_put_group;
+ 		}
+ 		clear_nlink(inode);
+ 		/* One for the block groups ref */
+@@ -977,13 +977,13 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 
+ 	ret = btrfs_search_slot(trans, tree_root, &key, path, -1, 1);
+ 	if (ret < 0)
+-		goto out;
++		goto out_put_group;
+ 	if (ret > 0)
+ 		btrfs_release_path(path);
+ 	if (ret == 0) {
+ 		ret = btrfs_del_item(trans, tree_root, path);
+ 		if (ret)
+-			goto out;
++			goto out_put_group;
+ 		btrfs_release_path(path);
+ 	}
+ 
+@@ -1102,9 +1102,9 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 
+ 	ret = remove_block_group_free_space(trans, block_group);
+ 	if (ret)
+-		goto out;
++		goto out_put_group;
+ 
+-	btrfs_put_block_group(block_group);
++	/* Once for the block groups rbtree */
+ 	btrfs_put_block_group(block_group);
+ 
+ 	ret = btrfs_search_slot(trans, root, &key, path, -1, 1);
+@@ -1127,6 +1127,10 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans,
+ 		/* once for the tree */
+ 		free_extent_map(em);
+ 	}
++
++out_put_group:
++	/* Once for the lookup reference */
++	btrfs_put_block_group(block_group);
+ out:
+ 	if (remove_rsv)
+ 		btrfs_delayed_refs_rsv_release(fs_info, 1);
+diff --git a/fs/btrfs/relocation.c b/fs/btrfs/relocation.c
+index 696e769d069a..8cb02b5417c5 100644
+--- a/fs/btrfs/relocation.c
++++ b/fs/btrfs/relocation.c
+@@ -4614,6 +4614,7 @@ int btrfs_recover_relocation(struct btrfs_root *root)
+ 		if (IS_ERR(fs_root)) {
+ 			err = PTR_ERR(fs_root);
+ 			list_add_tail(&reloc_root->root_list, &reloc_roots);
++			btrfs_end_transaction(trans);
+ 			goto out_unset;
+ 		}
+ 
+diff --git a/fs/btrfs/transaction.c b/fs/btrfs/transaction.c
+index beb6c69cd1e5..a209e2ef547f 100644
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -660,10 +660,19 @@ again:
+ 	}
+ 
+ got_it:
+-	btrfs_record_root_in_trans(h, root);
+-
+ 	if (!current->journal_info)
+ 		current->journal_info = h;
++
++	/*
++	 * btrfs_record_root_in_trans() needs to alloc new extents, and may
++	 * call btrfs_join_transaction() while we're also starting a
++	 * transaction.
++	 *
++	 * Thus it needs to be called after current->journal_info is initialized,
++	 * or we can deadlock.
++	 */
++	btrfs_record_root_in_trans(h, root);
++
+ 	return h;
+ 
+ join_fail:
+diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c
+index 7dd7552f53a4..61b9770ca78f 100644
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -4211,6 +4211,9 @@ static int btrfs_log_prealloc_extents(struct btrfs_trans_handle *trans,
+ 	const u64 ino = btrfs_ino(inode);
+ 	struct btrfs_path *dst_path = NULL;
+ 	bool dropped_extents = false;
++	u64 truncate_offset = i_size;
++	struct extent_buffer *leaf;
++	int slot;
+ 	int ins_nr = 0;
+ 	int start_slot;
+ 	int ret;
+@@ -4225,9 +4228,43 @@ static int btrfs_log_prealloc_extents(struct btrfs_trans_handle *trans,
+ 	if (ret < 0)
+ 		goto out;
+ 
++	/*
++	 * We must check if there is a prealloc extent that starts before the
++	 * i_size and crosses the i_size boundary. This is to ensure later we
++	 * truncate down to the end of that extent and not to the i_size, as
++	 * otherwise we end up losing part of the prealloc extent after a log
++	 * replay and with an implicit hole if there is another prealloc extent
++	 * that starts at an offset beyond i_size.
++	 */
++	ret = btrfs_previous_item(root, path, ino, BTRFS_EXTENT_DATA_KEY);
++	if (ret < 0)
++		goto out;
++
++	if (ret == 0) {
++		struct btrfs_file_extent_item *ei;
++
++		leaf = path->nodes[0];
++		slot = path->slots[0];
++		ei = btrfs_item_ptr(leaf, slot, struct btrfs_file_extent_item);
++
++		if (btrfs_file_extent_type(leaf, ei) ==
++		    BTRFS_FILE_EXTENT_PREALLOC) {
++			u64 extent_end;
++
++			btrfs_item_key_to_cpu(leaf, &key, slot);
++			extent_end = key.offset +
++				btrfs_file_extent_num_bytes(leaf, ei);
++
++			if (extent_end > i_size)
++				truncate_offset = extent_end;
++		}
++	} else {
++		ret = 0;
++	}
++
+ 	while (true) {
+-		struct extent_buffer *leaf = path->nodes[0];
+-		int slot = path->slots[0];
++		leaf = path->nodes[0];
++		slot = path->slots[0];
+ 
+ 		if (slot >= btrfs_header_nritems(leaf)) {
+ 			if (ins_nr > 0) {
+@@ -4265,7 +4302,7 @@ static int btrfs_log_prealloc_extents(struct btrfs_trans_handle *trans,
+ 				ret = btrfs_truncate_inode_items(trans,
+ 							 root->log_root,
+ 							 &inode->vfs_inode,
+-							 i_size,
++							 truncate_offset,
+ 							 BTRFS_EXTENT_DATA_KEY);
+ 			} while (ret == -EAGAIN);
+ 			if (ret)
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index a46de2cfc28e..38b25f599896 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -479,6 +479,7 @@ enum {
+ 	REQ_F_COMP_LOCKED_BIT,
+ 	REQ_F_NEED_CLEANUP_BIT,
+ 	REQ_F_OVERFLOW_BIT,
++	REQ_F_NO_FILE_TABLE_BIT,
+ };
+ 
+ enum {
+@@ -521,6 +522,8 @@ enum {
+ 	REQ_F_NEED_CLEANUP	= BIT(REQ_F_NEED_CLEANUP_BIT),
+ 	/* in overflow list */
+ 	REQ_F_OVERFLOW		= BIT(REQ_F_OVERFLOW_BIT),
++	/* doesn't need file table for this request */
++	REQ_F_NO_FILE_TABLE	= BIT(REQ_F_NO_FILE_TABLE_BIT),
+ };
+ 
+ /*
+@@ -711,6 +714,7 @@ static const struct io_op_def io_op_defs[] = {
+ 		.needs_file		= 1,
+ 		.fd_non_neg		= 1,
+ 		.needs_fs		= 1,
++		.file_table		= 1,
+ 	},
+ 	[IORING_OP_READ] = {
+ 		.needs_mm		= 1,
+@@ -2843,8 +2847,12 @@ static int io_statx(struct io_kiocb *req, struct io_kiocb **nxt,
+ 	struct kstat stat;
+ 	int ret;
+ 
+-	if (force_nonblock)
++	if (force_nonblock) {
++		/* only need file table for an actual valid fd */
++		if (ctx->dfd == -1 || ctx->dfd == AT_FDCWD)
++			req->flags |= REQ_F_NO_FILE_TABLE;
+ 		return -EAGAIN;
++	}
+ 
+ 	if (vfs_stat_set_lookup_flags(&lookup_flags, ctx->how.flags))
+ 		return -EINVAL;
+@@ -4632,7 +4640,7 @@ static int io_grab_files(struct io_kiocb *req)
+ 	int ret = -EBADF;
+ 	struct io_ring_ctx *ctx = req->ctx;
+ 
+-	if (req->work.files)
++	if (req->work.files || (req->flags & REQ_F_NO_FILE_TABLE))
+ 		return 0;
+ 	if (!ctx->ring_file)
+ 		return -EBADF;
+diff --git a/fs/nfs/nfs3acl.c b/fs/nfs/nfs3acl.c
+index c5c3fc6e6c60..26c94b32d6f4 100644
+--- a/fs/nfs/nfs3acl.c
++++ b/fs/nfs/nfs3acl.c
+@@ -253,37 +253,45 @@ int nfs3_proc_setacls(struct inode *inode, struct posix_acl *acl,
+ 
+ int nfs3_set_acl(struct inode *inode, struct posix_acl *acl, int type)
+ {
+-	struct posix_acl *alloc = NULL, *dfacl = NULL;
++	struct posix_acl *orig = acl, *dfacl = NULL, *alloc;
+ 	int status;
+ 
+ 	if (S_ISDIR(inode->i_mode)) {
+ 		switch(type) {
+ 		case ACL_TYPE_ACCESS:
+-			alloc = dfacl = get_acl(inode, ACL_TYPE_DEFAULT);
++			alloc = get_acl(inode, ACL_TYPE_DEFAULT);
+ 			if (IS_ERR(alloc))
+ 				goto fail;
++			dfacl = alloc;
+ 			break;
+ 
+ 		case ACL_TYPE_DEFAULT:
+-			dfacl = acl;
+-			alloc = acl = get_acl(inode, ACL_TYPE_ACCESS);
++			alloc = get_acl(inode, ACL_TYPE_ACCESS);
+ 			if (IS_ERR(alloc))
+ 				goto fail;
++			dfacl = acl;
++			acl = alloc;
+ 			break;
+ 		}
+ 	}
+ 
+ 	if (acl == NULL) {
+-		alloc = acl = posix_acl_from_mode(inode->i_mode, GFP_KERNEL);
++		alloc = posix_acl_from_mode(inode->i_mode, GFP_KERNEL);
+ 		if (IS_ERR(alloc))
+ 			goto fail;
++		acl = alloc;
+ 	}
+ 	status = __nfs3_proc_setacls(inode, acl, dfacl);
+-	posix_acl_release(alloc);
++out:
++	if (acl != orig)
++		posix_acl_release(acl);
++	if (dfacl != orig)
++		posix_acl_release(dfacl);
+ 	return status;
+ 
+ fail:
+-	return PTR_ERR(alloc);
++	status = PTR_ERR(alloc);
++	goto out;
+ }
+ 
+ const struct xattr_handler *nfs3_xattr_handlers[] = {
+diff --git a/fs/nfs/nfs4proc.c b/fs/nfs/nfs4proc.c
+index 69b7ab7a5815..1b1e21bcb994 100644
+--- a/fs/nfs/nfs4proc.c
++++ b/fs/nfs/nfs4proc.c
+@@ -7893,6 +7893,7 @@ static void
+ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ {
+ 	struct nfs41_bind_conn_to_session_args *args = task->tk_msg.rpc_argp;
++	struct nfs41_bind_conn_to_session_res *res = task->tk_msg.rpc_resp;
+ 	struct nfs_client *clp = args->client;
+ 
+ 	switch (task->tk_status) {
+@@ -7901,6 +7902,12 @@ nfs4_bind_one_conn_to_session_done(struct rpc_task *task, void *calldata)
+ 		nfs4_schedule_session_recovery(clp->cl_session,
+ 				task->tk_status);
+ 	}
++	if (args->dir == NFS4_CDFC4_FORE_OR_BOTH &&
++			res->dir != NFS4_CDFS4_BOTH) {
++		rpc_task_close_connection(task);
++		if (args->retries++ < MAX_BIND_CONN_TO_SESSION_RETRIES)
++			rpc_restart_call(task);
++	}
+ }
+ 
+ static const struct rpc_call_ops nfs4_bind_one_conn_to_session_ops = {
+@@ -7923,6 +7930,7 @@ int nfs4_proc_bind_one_conn_to_session(struct rpc_clnt *clnt,
+ 	struct nfs41_bind_conn_to_session_args args = {
+ 		.client = clp,
+ 		.dir = NFS4_CDFC4_FORE_OR_BOTH,
++		.retries = 0,
+ 	};
+ 	struct nfs41_bind_conn_to_session_res res;
+ 	struct rpc_message msg = {
+diff --git a/fs/ocfs2/dlmfs/dlmfs.c b/fs/ocfs2/dlmfs/dlmfs.c
+index 8e4f1ace467c..1de77f1a600b 100644
+--- a/fs/ocfs2/dlmfs/dlmfs.c
++++ b/fs/ocfs2/dlmfs/dlmfs.c
+@@ -275,7 +275,6 @@ static ssize_t dlmfs_file_write(struct file *filp,
+ 				loff_t *ppos)
+ {
+ 	int bytes_left;
+-	ssize_t writelen;
+ 	char *lvb_buf;
+ 	struct inode *inode = file_inode(filp);
+ 
+@@ -285,32 +284,30 @@ static ssize_t dlmfs_file_write(struct file *filp,
+ 	if (*ppos >= i_size_read(inode))
+ 		return -ENOSPC;
+ 
++	/* don't write past the lvb */
++	if (count > i_size_read(inode) - *ppos)
++		count = i_size_read(inode) - *ppos;
++
+ 	if (!count)
+ 		return 0;
+ 
+ 	if (!access_ok(buf, count))
+ 		return -EFAULT;
+ 
+-	/* don't write past the lvb */
+-	if ((count + *ppos) > i_size_read(inode))
+-		writelen = i_size_read(inode) - *ppos;
+-	else
+-		writelen = count - *ppos;
+-
+-	lvb_buf = kmalloc(writelen, GFP_NOFS);
++	lvb_buf = kmalloc(count, GFP_NOFS);
+ 	if (!lvb_buf)
+ 		return -ENOMEM;
+ 
+-	bytes_left = copy_from_user(lvb_buf, buf, writelen);
+-	writelen -= bytes_left;
+-	if (writelen)
+-		user_dlm_write_lvb(inode, lvb_buf, writelen);
++	bytes_left = copy_from_user(lvb_buf, buf, count);
++	count -= bytes_left;
++	if (count)
++		user_dlm_write_lvb(inode, lvb_buf, count);
+ 
+ 	kfree(lvb_buf);
+ 
+-	*ppos = *ppos + writelen;
+-	mlog(0, "wrote %zd bytes\n", writelen);
+-	return writelen;
++	*ppos = *ppos + count;
++	mlog(0, "wrote %zu bytes\n", count);
++	return count;
+ }
+ 
+ static void dlmfs_init_once(void *foo)
+diff --git a/fs/super.c b/fs/super.c
+index cd352530eca9..a288cd60d2ae 100644
+--- a/fs/super.c
++++ b/fs/super.c
+@@ -1302,8 +1302,8 @@ int get_tree_bdev(struct fs_context *fc,
+ 	mutex_lock(&bdev->bd_fsfreeze_mutex);
+ 	if (bdev->bd_fsfreeze_count > 0) {
+ 		mutex_unlock(&bdev->bd_fsfreeze_mutex);
+-		blkdev_put(bdev, mode);
+ 		warnf(fc, "%pg: Can't mount, blockdev is frozen", bdev);
++		blkdev_put(bdev, mode);
+ 		return -EBUSY;
+ 	}
+ 
+diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
+index 64461fc64e1b..7adc007f2023 100644
+--- a/include/linux/dmaengine.h
++++ b/include/linux/dmaengine.h
+@@ -336,13 +336,11 @@ struct dma_chan {
+  * @chan: driver channel device
+  * @device: sysfs device
+  * @dev_id: parent dma_device dev_id
+- * @idr_ref: reference count to gate release of dma_device dev_id
+  */
+ struct dma_chan_dev {
+ 	struct dma_chan *chan;
+ 	struct device device;
+ 	int dev_id;
+-	atomic_t *idr_ref;
+ };
+ 
+ /**
+@@ -827,6 +825,8 @@ struct dma_device {
+ 	int dev_id;
+ 	struct device *dev;
+ 	struct module *owner;
++	struct ida chan_ida;
++	struct mutex chan_mutex;	/* to protect chan_ida */
+ 
+ 	u32 src_addr_widths;
+ 	u32 dst_addr_widths;
+diff --git a/include/linux/nfs_xdr.h b/include/linux/nfs_xdr.h
+index 94c77ed55ce1..faa150b4e85d 100644
+--- a/include/linux/nfs_xdr.h
++++ b/include/linux/nfs_xdr.h
+@@ -1307,11 +1307,13 @@ struct nfs41_impl_id {
+ 	struct nfstime4			date;
+ };
+ 
++#define MAX_BIND_CONN_TO_SESSION_RETRIES 3
+ struct nfs41_bind_conn_to_session_args {
+ 	struct nfs_client		*client;
+ 	struct nfs4_sessionid		sessionid;
+ 	u32				dir;
+ 	bool				use_conn_in_rdma_mode;
++	int				retries;
+ };
+ 
+ struct nfs41_bind_conn_to_session_res {
+diff --git a/include/linux/sunrpc/clnt.h b/include/linux/sunrpc/clnt.h
+index ca7e108248e2..cc20a0816830 100644
+--- a/include/linux/sunrpc/clnt.h
++++ b/include/linux/sunrpc/clnt.h
+@@ -236,4 +236,9 @@ static inline int rpc_reply_expected(struct rpc_task *task)
+ 		(task->tk_msg.rpc_proc->p_decode != NULL);
+ }
+ 
++static inline void rpc_task_close_connection(struct rpc_task *task)
++{
++	if (task->tk_xprt)
++		xprt_force_disconnect(task->tk_xprt);
++}
+ #endif /* _LINUX_SUNRPC_CLNT_H */
+diff --git a/include/uapi/linux/dma-buf.h b/include/uapi/linux/dma-buf.h
+index dbc7092e04b5..7f30393b92c3 100644
+--- a/include/uapi/linux/dma-buf.h
++++ b/include/uapi/linux/dma-buf.h
+@@ -39,6 +39,12 @@ struct dma_buf_sync {
+ 
+ #define DMA_BUF_BASE		'b'
+ #define DMA_BUF_IOCTL_SYNC	_IOW(DMA_BUF_BASE, 0, struct dma_buf_sync)
++
++/* 32/64bitness of this uapi was botched in android, there's no difference
++ * between them in actual uapi, they're just different numbers.
++ */
+ #define DMA_BUF_SET_NAME	_IOW(DMA_BUF_BASE, 1, const char *)
++#define DMA_BUF_SET_NAME_A	_IOW(DMA_BUF_BASE, 1, u32)
++#define DMA_BUF_SET_NAME_B	_IOW(DMA_BUF_BASE, 1, u64)
+ 
+ #endif
+diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
+index 6dbeedb7354c..daf3ea9d81de 100644
+--- a/kernel/power/hibernate.c
++++ b/kernel/power/hibernate.c
+@@ -898,6 +898,13 @@ static int software_resume(void)
+ 	error = freeze_processes();
+ 	if (error)
+ 		goto Close_Finish;
++
++	error = freeze_kernel_threads();
++	if (error) {
++		thaw_processes();
++		goto Close_Finish;
++	}
++
+ 	error = load_image_and_restore();
+ 	thaw_processes();
+  Finish:
+diff --git a/security/selinux/hooks.c b/security/selinux/hooks.c
+index 1659b59fb5d7..053269461bcc 100644
+--- a/security/selinux/hooks.c
++++ b/security/selinux/hooks.c
+@@ -5829,40 +5829,60 @@ static unsigned int selinux_ipv6_postroute(void *priv,
+ 
+ static int selinux_netlink_send(struct sock *sk, struct sk_buff *skb)
+ {
+-	int err = 0;
+-	u32 perm;
++	int rc = 0;
++	unsigned int msg_len;
++	unsigned int data_len = skb->len;
++	unsigned char *data = skb->data;
+ 	struct nlmsghdr *nlh;
+ 	struct sk_security_struct *sksec = sk->sk_security;
++	u16 sclass = sksec->sclass;
++	u32 perm;
+ 
+-	if (skb->len < NLMSG_HDRLEN) {
+-		err = -EINVAL;
+-		goto out;
+-	}
+-	nlh = nlmsg_hdr(skb);
++	while (data_len >= nlmsg_total_size(0)) {
++		nlh = (struct nlmsghdr *)data;
++
++		/* NOTE: the nlmsg_len field isn't reliably set by some netlink
++		 *       users which means we can't reject skb's with bogus
++		 *       length fields; our solution is to follow what
++		 *       netlink_rcv_skb() does and simply skip processing at
++		 *       messages with length fields that are clearly junk
++		 */
++		if (nlh->nlmsg_len < NLMSG_HDRLEN || nlh->nlmsg_len > data_len)
++			return 0;
+ 
+-	err = selinux_nlmsg_lookup(sksec->sclass, nlh->nlmsg_type, &perm);
+-	if (err) {
+-		if (err == -EINVAL) {
++		rc = selinux_nlmsg_lookup(sclass, nlh->nlmsg_type, &perm);
++		if (rc == 0) {
++			rc = sock_has_perm(sk, perm);
++			if (rc)
++				return rc;
++		} else if (rc == -EINVAL) {
++			/* -EINVAL is a missing msg/perm mapping */
+ 			pr_warn_ratelimited("SELinux: unrecognized netlink"
+-			       " message: protocol=%hu nlmsg_type=%hu sclass=%s"
+-			       " pid=%d comm=%s\n",
+-			       sk->sk_protocol, nlh->nlmsg_type,
+-			       secclass_map[sksec->sclass - 1].name,
+-			       task_pid_nr(current), current->comm);
+-			if (!enforcing_enabled(&selinux_state) ||
+-			    security_get_allow_unknown(&selinux_state))
+-				err = 0;
++				" message: protocol=%hu nlmsg_type=%hu sclass=%s"
++				" pid=%d comm=%s\n",
++				sk->sk_protocol, nlh->nlmsg_type,
++				secclass_map[sclass - 1].name,
++				task_pid_nr(current), current->comm);
++			if (enforcing_enabled(&selinux_state) &&
++			    !security_get_allow_unknown(&selinux_state))
++				return rc;
++			rc = 0;
++		} else if (rc == -ENOENT) {
++			/* -ENOENT is a missing socket/class mapping, ignore */
++			rc = 0;
++		} else {
++			return rc;
+ 		}
+ 
+-		/* Ignore */
+-		if (err == -ENOENT)
+-			err = 0;
+-		goto out;
++		/* move to the next message after applying netlink padding */
++		msg_len = NLMSG_ALIGN(nlh->nlmsg_len);
++		if (msg_len >= data_len)
++			return 0;
++		data_len -= msg_len;
++		data += msg_len;
+ 	}
+ 
+-	err = sock_has_perm(sk, perm);
+-out:
+-	return err;
++	return rc;
+ }
+ 
+ static void ipc_init_security(struct ipc_security_struct *isec, u16 sclass)
+diff --git a/sound/core/oss/pcm_plugin.c b/sound/core/oss/pcm_plugin.c
+index 50c35ecc8953..d1760f86773c 100644
+--- a/sound/core/oss/pcm_plugin.c
++++ b/sound/core/oss/pcm_plugin.c
+@@ -211,21 +211,23 @@ static snd_pcm_sframes_t plug_client_size(struct snd_pcm_substream *plug,
+ 	if (stream == SNDRV_PCM_STREAM_PLAYBACK) {
+ 		plugin = snd_pcm_plug_last(plug);
+ 		while (plugin && drv_frames > 0) {
+-			if (check_size && drv_frames > plugin->buf_frames)
+-				drv_frames = plugin->buf_frames;
+ 			plugin_prev = plugin->prev;
+ 			if (plugin->src_frames)
+ 				drv_frames = plugin->src_frames(plugin, drv_frames);
++			if (check_size && plugin->buf_frames &&
++			    drv_frames > plugin->buf_frames)
++				drv_frames = plugin->buf_frames;
+ 			plugin = plugin_prev;
+ 		}
+ 	} else if (stream == SNDRV_PCM_STREAM_CAPTURE) {
+ 		plugin = snd_pcm_plug_first(plug);
+ 		while (plugin && drv_frames > 0) {
+ 			plugin_next = plugin->next;
++			if (check_size && plugin->buf_frames &&
++			    drv_frames > plugin->buf_frames)
++				drv_frames = plugin->buf_frames;
+ 			if (plugin->dst_frames)
+ 				drv_frames = plugin->dst_frames(plugin, drv_frames);
+-			if (check_size && drv_frames > plugin->buf_frames)
+-				drv_frames = plugin->buf_frames;
+ 			plugin = plugin_next;
+ 		}
+ 	} else
+@@ -251,26 +253,28 @@ static snd_pcm_sframes_t plug_slave_size(struct snd_pcm_substream *plug,
+ 		plugin = snd_pcm_plug_first(plug);
+ 		while (plugin && frames > 0) {
+ 			plugin_next = plugin->next;
++			if (check_size && plugin->buf_frames &&
++			    frames > plugin->buf_frames)
++				frames = plugin->buf_frames;
+ 			if (plugin->dst_frames) {
+ 				frames = plugin->dst_frames(plugin, frames);
+ 				if (frames < 0)
+ 					return frames;
+ 			}
+-			if (check_size && frames > plugin->buf_frames)
+-				frames = plugin->buf_frames;
+ 			plugin = plugin_next;
+ 		}
+ 	} else if (stream == SNDRV_PCM_STREAM_CAPTURE) {
+ 		plugin = snd_pcm_plug_last(plug);
+ 		while (plugin) {
+-			if (check_size && frames > plugin->buf_frames)
+-				frames = plugin->buf_frames;
+ 			plugin_prev = plugin->prev;
+ 			if (plugin->src_frames) {
+ 				frames = plugin->src_frames(plugin, frames);
+ 				if (frames < 0)
+ 					return frames;
+ 			}
++			if (check_size && plugin->buf_frames &&
++			    frames > plugin->buf_frames)
++				frames = plugin->buf_frames;
+ 			plugin = plugin_prev;
+ 		}
+ 	} else
+diff --git a/sound/isa/opti9xx/miro.c b/sound/isa/opti9xx/miro.c
+index e764816a8f7a..b039429e6871 100644
+--- a/sound/isa/opti9xx/miro.c
++++ b/sound/isa/opti9xx/miro.c
+@@ -867,10 +867,13 @@ static void snd_miro_write(struct snd_miro *chip, unsigned char reg,
+ 	spin_unlock_irqrestore(&chip->lock, flags);
+ }
+ 
++static inline void snd_miro_write_mask(struct snd_miro *chip,
++		unsigned char reg, unsigned char value, unsigned char mask)
++{
++	unsigned char oldval = snd_miro_read(chip, reg);
+ 
+-#define snd_miro_write_mask(chip, reg, value, mask)	\
+-	snd_miro_write(chip, reg,			\
+-		(snd_miro_read(chip, reg) & ~(mask)) | ((value) & (mask)))
++	snd_miro_write(chip, reg, (oldval & ~mask) | (value & mask));
++}
+ 
+ /*
+  *  Proc Interface
+diff --git a/sound/isa/opti9xx/opti92x-ad1848.c b/sound/isa/opti9xx/opti92x-ad1848.c
+index d06b29693c85..0e6d20e49158 100644
+--- a/sound/isa/opti9xx/opti92x-ad1848.c
++++ b/sound/isa/opti9xx/opti92x-ad1848.c
+@@ -317,10 +317,13 @@ static void snd_opti9xx_write(struct snd_opti9xx *chip, unsigned char reg,
+ }
+ 
+ 
+-#define snd_opti9xx_write_mask(chip, reg, value, mask)	\
+-	snd_opti9xx_write(chip, reg,			\
+-		(snd_opti9xx_read(chip, reg) & ~(mask)) | ((value) & (mask)))
++static inline void snd_opti9xx_write_mask(struct snd_opti9xx *chip,
++		unsigned char reg, unsigned char value, unsigned char mask)
++{
++	unsigned char oldval = snd_opti9xx_read(chip, reg);
+ 
++	snd_opti9xx_write(chip, reg, (oldval & ~mask) | (value & mask));
++}
+ 
+ static int snd_opti9xx_configure(struct snd_opti9xx *chip,
+ 					   long port,
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 8bc4d66ff986..0c1a59d5ad59 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -1934,8 +1934,10 @@ static bool check_non_pcm_per_cvt(struct hda_codec *codec, hda_nid_t cvt_nid)
+ 	/* Add sanity check to pass klockwork check.
+ 	 * This should never happen.
+ 	 */
+-	if (WARN_ON(spdif == NULL))
++	if (WARN_ON(spdif == NULL)) {
++		mutex_unlock(&codec->spdif_mutex);
+ 		return true;
++	}
+ 	non_pcm = !!(spdif->status & IEC958_AES0_NONAUDIO);
+ 	mutex_unlock(&codec->spdif_mutex);
+ 	return non_pcm;
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index f2fccf267b48..da4863d7f7f2 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -7295,6 +7295,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x8560, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1558, 0x8561, "System76 Gazelle (gaze14)", ALC269_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x1036, "Lenovo P520", ALC233_FIXUP_LENOVO_MULTI_CODECS),
++	SND_PCI_QUIRK(0x17aa, 0x1048, "ThinkCentre Station", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x17aa, 0x20f2, "Thinkpad SL410/510", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x215e, "Thinkpad L512", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+diff --git a/sound/usb/line6/podhd.c b/sound/usb/line6/podhd.c
+index d37db32ecd3b..e39dc85c355a 100644
+--- a/sound/usb/line6/podhd.c
++++ b/sound/usb/line6/podhd.c
+@@ -21,8 +21,7 @@
+ enum {
+ 	LINE6_PODHD300,
+ 	LINE6_PODHD400,
+-	LINE6_PODHD500_0,
+-	LINE6_PODHD500_1,
++	LINE6_PODHD500,
+ 	LINE6_PODX3,
+ 	LINE6_PODX3LIVE,
+ 	LINE6_PODHD500X,
+@@ -318,8 +317,7 @@ static const struct usb_device_id podhd_id_table[] = {
+ 	/* TODO: no need to alloc data interfaces when only audio is used */
+ 	{ LINE6_DEVICE(0x5057),    .driver_info = LINE6_PODHD300 },
+ 	{ LINE6_DEVICE(0x5058),    .driver_info = LINE6_PODHD400 },
+-	{ LINE6_IF_NUM(0x414D, 0), .driver_info = LINE6_PODHD500_0 },
+-	{ LINE6_IF_NUM(0x414D, 1), .driver_info = LINE6_PODHD500_1 },
++	{ LINE6_IF_NUM(0x414D, 0), .driver_info = LINE6_PODHD500 },
+ 	{ LINE6_IF_NUM(0x414A, 0), .driver_info = LINE6_PODX3 },
+ 	{ LINE6_IF_NUM(0x414B, 0), .driver_info = LINE6_PODX3LIVE },
+ 	{ LINE6_IF_NUM(0x4159, 0), .driver_info = LINE6_PODHD500X },
+@@ -352,23 +350,13 @@ static const struct line6_properties podhd_properties_table[] = {
+ 		.ep_audio_r = 0x82,
+ 		.ep_audio_w = 0x01,
+ 	},
+-	[LINE6_PODHD500_0] = {
++	[LINE6_PODHD500] = {
+ 		.id = "PODHD500",
+ 		.name = "POD HD500",
+-		.capabilities	= LINE6_CAP_PCM
++		.capabilities	= LINE6_CAP_PCM | LINE6_CAP_CONTROL
+ 				| LINE6_CAP_HWMON,
+ 		.altsetting = 1,
+-		.ep_ctrl_r = 0x81,
+-		.ep_ctrl_w = 0x01,
+-		.ep_audio_r = 0x86,
+-		.ep_audio_w = 0x02,
+-	},
+-	[LINE6_PODHD500_1] = {
+-		.id = "PODHD500",
+-		.name = "POD HD500",
+-		.capabilities	= LINE6_CAP_PCM
+-				| LINE6_CAP_HWMON,
+-		.altsetting = 0,
++		.ctrl_if = 1,
+ 		.ep_ctrl_r = 0x81,
+ 		.ep_ctrl_w = 0x01,
+ 		.ep_audio_r = 0x86,
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 7f558f4b4520..0686e056e39b 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1643,7 +1643,7 @@ u64 snd_usb_interface_dsd_format_quirks(struct snd_usb_audio *chip,
+ 
+ 	case USB_ID(0x0d8c, 0x0316): /* Hegel HD12 DSD */
+ 	case USB_ID(0x10cb, 0x0103): /* The Bit Opus #3; with fp->dsd_raw */
+-	case USB_ID(0x16b0, 0x06b2): /* NuPrime DAC-10 */
++	case USB_ID(0x16d0, 0x06b2): /* NuPrime DAC-10 */
+ 	case USB_ID(0x16d0, 0x09dd): /* Encore mDSD */
+ 	case USB_ID(0x16d0, 0x0733): /* Furutech ADL Stratos */
+ 	case USB_ID(0x16d0, 0x09db): /* NuPrime Audio DAC-9 */


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-09 19:45 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-09 19:45 UTC (permalink / raw)
  To: gentoo-commits

commit:     b7d15285e95afc9035f1723b7241e0bfe5947ab9
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sat May  9 19:43:00 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sat May  9 19:43:00 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=b7d15285

x86: Fix early boot crash on gcc-10

Bug: https://bugs.gentoo.org/720776

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                |   4 +
 1700_x86-gcc-10-early-boot-crash-fix.patch | 131 +++++++++++++++++++++++++++++
 2 files changed, 135 insertions(+)

diff --git a/0000_README b/0000_README
index 13f0a7d..9c9c8b5 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
+Patch:  1700_x86-gcc-10-early-boot-crash-fix.patch
+From:   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/patch/?id=f670269a42bfdd2c83a1118cc3d1b475547eac22
+Desc:   x86: Fix early boot crash on gcc-10.
+
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1700_x86-gcc-10-early-boot-crash-fix.patch b/1700_x86-gcc-10-early-boot-crash-fix.patch
new file mode 100644
index 0000000..8cdf651
--- /dev/null
+++ b/1700_x86-gcc-10-early-boot-crash-fix.patch
@@ -0,0 +1,131 @@
+From f670269a42bfdd2c83a1118cc3d1b475547eac22 Mon Sep 17 00:00:00 2001
+From: Borislav Petkov <bp@suse.de>
+Date: Wed, 22 Apr 2020 18:11:30 +0200
+Subject: x86: Fix early boot crash on gcc-10, next try
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+... or the odyssey of trying to disable the stack protector for the
+function which generates the stack canary value.
+
+The whole story started with Sergei reporting a boot crash with a kernel
+built with gcc-10:
+
+  Kernel panic — not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
+  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5—00235—gfffb08b37df9 #139
+  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M—D3H, BIOS F12 11/14/2013
+  Call Trace:
+    dump_stack
+    panic
+    ? start_secondary
+    __stack_chk_fail
+    start_secondary
+    secondary_startup_64
+  -—-[ end Kernel panic — not syncing: stack—protector: Kernel stack is corrupted in: start_secondary
+
+This happens because gcc-10 tail-call optimizes the last function call
+in start_secondary() - cpu_startup_entry() - and thus emits a stack
+canary check which fails because the canary value changes after the
+boot_init_stack_canary() call.
+
+To fix that, the initial attempt was to mark the one function which
+generates the stack canary with:
+
+  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
+
+however, using the optimize attribute doesn't work cumulatively
+as the attribute does not add to but rather replaces previously
+supplied optimization options - roughly all -fxxx options.
+
+The key one among them being -fno-omit-frame-pointer and thus leading to
+not present frame pointer - frame pointer which the kernel needs.
+
+The next attempt to prevent compilers from tail-call optimizing
+the last function call cpu_startup_entry(), shy of carving out
+start_secondary() into a separate compilation unit and building it with
+-fno-stack-protector, is this one.
+
+The current solution is short and sweet, and reportedly, is supported by
+both compilers so let's see how far we'll get this time.
+
+Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
+Reviewed-by: Kees Cook <keescook@chromium.org>
+Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
+---
+ arch/x86/include/asm/stackprotector.h | 7 ++++++-
+ arch/x86/kernel/smpboot.c             | 8 ++++++++
+ arch/x86/xen/smp_pv.c                 | 1 +
+ include/linux/compiler.h              | 6 ++++++
+ 4 files changed, 21 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
+index 91e29b6a86a5..9804a7957f4e 100644
+--- a/arch/x86/include/asm/stackprotector.h
++++ b/arch/x86/include/asm/stackprotector.h
+@@ -55,8 +55,13 @@
+ /*
+  * Initialize the stackprotector canary value.
+  *
+- * NOTE: this must only be called from functions that never return,
++ * NOTE: this must only be called from functions that never return
+  * and it must always be inlined.
++ *
++ * In addition, it should be called from a compilation unit for which
++ * stack protector is disabled. Alternatively, the caller should not end
++ * with a function call which gets tail-call optimized as that would
++ * lead to checking a modified canary value.
+  */
+ static __always_inline void boot_init_stack_canary(void)
+ {
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index fe3ab9632f3b..4f275ac7830b 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -266,6 +266,14 @@ static void notrace start_secondary(void *unused)
+ 
+ 	wmb();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
++
++	/*
++	 * Prevent tail call to cpu_startup_entry() because the stack protector
++	 * guard has been changed a couple of function calls up, in
++	 * boot_init_stack_canary() and must not be checked before tail calling
++	 * another function.
++	 */
++	prevent_tail_call_optimization();
+ }
+ 
+ /**
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 8fb8a50a28b4..f2adb63b2d7c 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -93,6 +93,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
+ 	cpu_bringup();
+ 	boot_init_stack_canary();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
++	prevent_tail_call_optimization();
+ }
+ 
+ void xen_smp_intr_free_pv(unsigned int cpu)
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 034b0a644efc..732754d96039 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -356,4 +356,10 @@ static inline void *offset_to_ptr(const int *off)
+ /* &a[0] degrades to a pointer: a different type from an array */
+ #define __must_be_array(a)	BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
+ 
++/*
++ * This is needed in functions which generate the stack canary, see
++ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
++ */
++#define prevent_tail_call_optimization()	asm("")
++
+ #endif /* __LINUX_COMPILER_H */
+-- 
+cgit 1.2-0.3.lf.el7
+


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-11 22:46 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-11 22:46 UTC (permalink / raw)
  To: gentoo-commits

commit:     ca149b605a796e49d6fa6b4c264c93712590bfd5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Mon May 11 22:46:12 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Mon May 11 22:46:12 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ca149b60

Linux patch 5.6.12

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1011_linux-5.6.12.patch | 1575 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1579 insertions(+)

diff --git a/0000_README b/0000_README
index 9c9c8b5..dcfb651 100644
--- a/0000_README
+++ b/0000_README
@@ -87,6 +87,10 @@ Patch:  1010_linux-5.6.11.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.11
 
+Patch:  1011_linux-5.6.12.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.12
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1011_linux-5.6.12.patch b/1011_linux-5.6.12.patch
new file mode 100644
index 0000000..d32b884
--- /dev/null
+++ b/1011_linux-5.6.12.patch
@@ -0,0 +1,1575 @@
+diff --git a/Makefile b/Makefile
+index 5dedd6f9ad75..97e4c4d9ac95 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 11
++SUBLEVEL = 12
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
+index 09b0937d56b1..19717d0a1100 100644
+--- a/arch/x86/kvm/vmx/ops.h
++++ b/arch/x86/kvm/vmx/ops.h
+@@ -12,6 +12,7 @@
+ 
+ #define __ex(x) __kvm_handle_fault_on_reboot(x)
+ 
++asmlinkage void vmread_error(unsigned long field, bool fault);
+ __attribute__((regparm(0))) void vmread_error_trampoline(unsigned long field,
+ 							 bool fault);
+ void vmwrite_error(unsigned long field, unsigned long value);
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index f4dbdfafafe3..4edc8a3ce40f 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -982,10 +982,7 @@ static int acpi_s2idle_prepare_late(void)
+ 
+ static void acpi_s2idle_sync(void)
+ {
+-	/*
+-	 * The EC driver uses the system workqueue and an additional special
+-	 * one, so those need to be flushed too.
+-	 */
++	/* The EC driver uses special workqueues that need to be flushed. */
+ 	acpi_ec_flush_work();
+ 	acpi_os_wait_events_complete(); /* synchronize Notify handling */
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+index b03b1eb7ba04..1ae174c3d160 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c
+@@ -91,7 +91,8 @@ void amdgpu_pm_acpi_event_handler(struct amdgpu_device *adev)
+ 			adev->pm.ac_power = true;
+ 		else
+ 			adev->pm.ac_power = false;
+-		if (adev->powerplay.pp_funcs->enable_bapm)
++		if (adev->powerplay.pp_funcs &&
++		    adev->powerplay.pp_funcs->enable_bapm)
+ 			amdgpu_dpm_enable_bapm(adev, adev->pm.ac_power);
+ 		mutex_unlock(&adev->pm.mutex);
+ 	}
+diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
+index 77c14671866c..719597c5d27d 100644
+--- a/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
++++ b/drivers/gpu/drm/amd/powerplay/hwmgr/processpptables.c
+@@ -984,6 +984,32 @@ static int init_thermal_controller(
+ 			struct pp_hwmgr *hwmgr,
+ 			const ATOM_PPLIB_POWERPLAYTABLE *powerplay_table)
+ {
++	hwmgr->thermal_controller.ucType =
++			powerplay_table->sThermalController.ucType;
++	hwmgr->thermal_controller.ucI2cLine =
++			powerplay_table->sThermalController.ucI2cLine;
++	hwmgr->thermal_controller.ucI2cAddress =
++			powerplay_table->sThermalController.ucI2cAddress;
++
++	hwmgr->thermal_controller.fanInfo.bNoFan =
++		(0 != (powerplay_table->sThermalController.ucFanParameters &
++			ATOM_PP_FANPARAMETERS_NOFAN));
++
++	hwmgr->thermal_controller.fanInfo.ucTachometerPulsesPerRevolution =
++		powerplay_table->sThermalController.ucFanParameters &
++		ATOM_PP_FANPARAMETERS_TACHOMETER_PULSES_PER_REVOLUTION_MASK;
++
++	hwmgr->thermal_controller.fanInfo.ulMinRPM
++		= powerplay_table->sThermalController.ucFanMinRPM * 100UL;
++	hwmgr->thermal_controller.fanInfo.ulMaxRPM
++		= powerplay_table->sThermalController.ucFanMaxRPM * 100UL;
++
++	set_hw_cap(hwmgr,
++		   ATOM_PP_THERMALCONTROLLER_NONE != hwmgr->thermal_controller.ucType,
++		   PHM_PlatformCaps_ThermalController);
++
++	hwmgr->thermal_controller.use_hw_fan_control = 1;
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+index f7a1ce37227c..4a52c310058d 100644
+--- a/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/renoir_ppt.c
+@@ -889,12 +889,17 @@ static int renoir_read_sensor(struct smu_context *smu,
+ 
+ static bool renoir_is_dpm_running(struct smu_context *smu)
+ {
++	struct amdgpu_device *adev = smu->adev;
++
+ 	/*
+ 	 * Util now, the pmfw hasn't exported the interface of SMU
+ 	 * feature mask to APU SKU so just force on all the feature
+ 	 * at early initial stage.
+ 	 */
+-	return true;
++	if (adev->in_suspend)
++		return false;
++	else
++		return true;
+ 
+ }
+ 
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c b/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
+index 526507102c1e..8d32fea84c75 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix-anx6345.c
+@@ -485,6 +485,9 @@ static int anx6345_get_modes(struct drm_connector *connector)
+ 
+ 	num_modes += drm_add_edid_modes(connector, anx6345->edid);
+ 
++	/* Driver currently supports only 6bpc */
++	connector->display_info.bpc = 6;
++
+ unlock:
+ 	if (power_off)
+ 		anx6345_poweroff(anx6345);
+diff --git a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+index 6effe532f820..461eff94d276 100644
+--- a/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
++++ b/drivers/gpu/drm/bridge/analogix/analogix_dp_core.c
+@@ -1636,8 +1636,7 @@ static ssize_t analogix_dpaux_transfer(struct drm_dp_aux *aux,
+ }
+ 
+ struct analogix_dp_device *
+-analogix_dp_bind(struct device *dev, struct drm_device *drm_dev,
+-		 struct analogix_dp_plat_data *plat_data)
++analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data)
+ {
+ 	struct platform_device *pdev = to_platform_device(dev);
+ 	struct analogix_dp_device *dp;
+@@ -1740,22 +1739,30 @@ analogix_dp_bind(struct device *dev, struct drm_device *drm_dev,
+ 					irq_flags, "analogix-dp", dp);
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "failed to request irq\n");
+-		goto err_disable_pm_runtime;
++		return ERR_PTR(ret);
+ 	}
+ 	disable_irq(dp->irq);
+ 
++	return dp;
++}
++EXPORT_SYMBOL_GPL(analogix_dp_probe);
++
++int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev)
++{
++	int ret;
++
+ 	dp->drm_dev = drm_dev;
+ 	dp->encoder = dp->plat_data->encoder;
+ 
+ 	dp->aux.name = "DP-AUX";
+ 	dp->aux.transfer = analogix_dpaux_transfer;
+-	dp->aux.dev = &pdev->dev;
++	dp->aux.dev = dp->dev;
+ 
+ 	ret = drm_dp_aux_register(&dp->aux);
+ 	if (ret)
+-		return ERR_PTR(ret);
++		return ret;
+ 
+-	pm_runtime_enable(dev);
++	pm_runtime_enable(dp->dev);
+ 
+ 	ret = analogix_dp_create_bridge(drm_dev, dp);
+ 	if (ret) {
+@@ -1763,13 +1770,12 @@ analogix_dp_bind(struct device *dev, struct drm_device *drm_dev,
+ 		goto err_disable_pm_runtime;
+ 	}
+ 
+-	return dp;
++	return 0;
+ 
+ err_disable_pm_runtime:
++	pm_runtime_disable(dp->dev);
+ 
+-	pm_runtime_disable(dev);
+-
+-	return ERR_PTR(ret);
++	return ret;
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_bind);
+ 
+@@ -1786,10 +1792,15 @@ void analogix_dp_unbind(struct analogix_dp_device *dp)
+ 
+ 	drm_dp_aux_unregister(&dp->aux);
+ 	pm_runtime_disable(dp->dev);
+-	clk_disable_unprepare(dp->clock);
+ }
+ EXPORT_SYMBOL_GPL(analogix_dp_unbind);
+ 
++void analogix_dp_remove(struct analogix_dp_device *dp)
++{
++	clk_disable_unprepare(dp->clock);
++}
++EXPORT_SYMBOL_GPL(analogix_dp_remove);
++
+ #ifdef CONFIG_PM
+ int analogix_dp_suspend(struct analogix_dp_device *dp)
+ {
+diff --git a/drivers/gpu/drm/exynos/exynos_dp.c b/drivers/gpu/drm/exynos/exynos_dp.c
+index 4785885c0f4f..065a1cb2a544 100644
+--- a/drivers/gpu/drm/exynos/exynos_dp.c
++++ b/drivers/gpu/drm/exynos/exynos_dp.c
+@@ -158,15 +158,8 @@ static int exynos_dp_bind(struct device *dev, struct device *master, void *data)
+ 	struct drm_device *drm_dev = data;
+ 	int ret;
+ 
+-	dp->dev = dev;
+ 	dp->drm_dev = drm_dev;
+ 
+-	dp->plat_data.dev_type = EXYNOS_DP;
+-	dp->plat_data.power_on_start = exynos_dp_poweron;
+-	dp->plat_data.power_off = exynos_dp_poweroff;
+-	dp->plat_data.attach = exynos_dp_bridge_attach;
+-	dp->plat_data.get_modes = exynos_dp_get_modes;
+-
+ 	if (!dp->plat_data.panel && !dp->ptn_bridge) {
+ 		ret = exynos_dp_dt_parse_panel(dp);
+ 		if (ret)
+@@ -184,13 +177,11 @@ static int exynos_dp_bind(struct device *dev, struct device *master, void *data)
+ 
+ 	dp->plat_data.encoder = encoder;
+ 
+-	dp->adp = analogix_dp_bind(dev, dp->drm_dev, &dp->plat_data);
+-	if (IS_ERR(dp->adp)) {
++	ret = analogix_dp_bind(dp->adp, dp->drm_dev);
++	if (ret)
+ 		dp->encoder.funcs->destroy(&dp->encoder);
+-		return PTR_ERR(dp->adp);
+-	}
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ static void exynos_dp_unbind(struct device *dev, struct device *master,
+@@ -221,6 +212,7 @@ static int exynos_dp_probe(struct platform_device *pdev)
+ 	if (!dp)
+ 		return -ENOMEM;
+ 
++	dp->dev = dev;
+ 	/*
+ 	 * We just use the drvdata until driver run into component
+ 	 * add function, and then we would set drvdata to null, so
+@@ -246,16 +238,29 @@ static int exynos_dp_probe(struct platform_device *pdev)
+ 
+ 	/* The remote port can be either a panel or a bridge */
+ 	dp->plat_data.panel = panel;
++	dp->plat_data.dev_type = EXYNOS_DP;
++	dp->plat_data.power_on_start = exynos_dp_poweron;
++	dp->plat_data.power_off = exynos_dp_poweroff;
++	dp->plat_data.attach = exynos_dp_bridge_attach;
++	dp->plat_data.get_modes = exynos_dp_get_modes;
+ 	dp->plat_data.skip_connector = !!bridge;
++
+ 	dp->ptn_bridge = bridge;
+ 
+ out:
++	dp->adp = analogix_dp_probe(dev, &dp->plat_data);
++	if (IS_ERR(dp->adp))
++		return PTR_ERR(dp->adp);
++
+ 	return component_add(&pdev->dev, &exynos_dp_ops);
+ }
+ 
+ static int exynos_dp_remove(struct platform_device *pdev)
+ {
++	struct exynos_dp_device *dp = platform_get_drvdata(pdev);
++
+ 	component_del(&pdev->dev, &exynos_dp_ops);
++	analogix_dp_remove(dp->adp);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+index f38f5e113c6b..ce98c08aa8b4 100644
+--- a/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
++++ b/drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
+@@ -325,15 +325,9 @@ static int rockchip_dp_bind(struct device *dev, struct device *master,
+ 			    void *data)
+ {
+ 	struct rockchip_dp_device *dp = dev_get_drvdata(dev);
+-	const struct rockchip_dp_chip_data *dp_data;
+ 	struct drm_device *drm_dev = data;
+ 	int ret;
+ 
+-	dp_data = of_device_get_match_data(dev);
+-	if (!dp_data)
+-		return -ENODEV;
+-
+-	dp->data = dp_data;
+ 	dp->drm_dev = drm_dev;
+ 
+ 	ret = rockchip_dp_drm_create_encoder(dp);
+@@ -344,16 +338,9 @@ static int rockchip_dp_bind(struct device *dev, struct device *master,
+ 
+ 	dp->plat_data.encoder = &dp->encoder;
+ 
+-	dp->plat_data.dev_type = dp->data->chip_type;
+-	dp->plat_data.power_on_start = rockchip_dp_poweron_start;
+-	dp->plat_data.power_off = rockchip_dp_powerdown;
+-	dp->plat_data.get_modes = rockchip_dp_get_modes;
+-
+-	dp->adp = analogix_dp_bind(dev, dp->drm_dev, &dp->plat_data);
+-	if (IS_ERR(dp->adp)) {
+-		ret = PTR_ERR(dp->adp);
++	ret = analogix_dp_bind(dp->adp, drm_dev);
++	if (ret)
+ 		goto err_cleanup_encoder;
+-	}
+ 
+ 	return 0;
+ err_cleanup_encoder:
+@@ -368,8 +355,6 @@ static void rockchip_dp_unbind(struct device *dev, struct device *master,
+ 
+ 	analogix_dp_unbind(dp->adp);
+ 	dp->encoder.funcs->destroy(&dp->encoder);
+-
+-	dp->adp = ERR_PTR(-ENODEV);
+ }
+ 
+ static const struct component_ops rockchip_dp_component_ops = {
+@@ -380,10 +365,15 @@ static const struct component_ops rockchip_dp_component_ops = {
+ static int rockchip_dp_probe(struct platform_device *pdev)
+ {
+ 	struct device *dev = &pdev->dev;
++	const struct rockchip_dp_chip_data *dp_data;
+ 	struct drm_panel *panel = NULL;
+ 	struct rockchip_dp_device *dp;
+ 	int ret;
+ 
++	dp_data = of_device_get_match_data(dev);
++	if (!dp_data)
++		return -ENODEV;
++
+ 	ret = drm_of_find_panel_or_bridge(dev->of_node, 1, 0, &panel, NULL);
+ 	if (ret < 0)
+ 		return ret;
+@@ -394,7 +384,12 @@ static int rockchip_dp_probe(struct platform_device *pdev)
+ 
+ 	dp->dev = dev;
+ 	dp->adp = ERR_PTR(-ENODEV);
++	dp->data = dp_data;
+ 	dp->plat_data.panel = panel;
++	dp->plat_data.dev_type = dp->data->chip_type;
++	dp->plat_data.power_on_start = rockchip_dp_poweron_start;
++	dp->plat_data.power_off = rockchip_dp_powerdown;
++	dp->plat_data.get_modes = rockchip_dp_get_modes;
+ 
+ 	ret = rockchip_dp_of_probe(dp);
+ 	if (ret < 0)
+@@ -402,12 +397,19 @@ static int rockchip_dp_probe(struct platform_device *pdev)
+ 
+ 	platform_set_drvdata(pdev, dp);
+ 
++	dp->adp = analogix_dp_probe(dev, &dp->plat_data);
++	if (IS_ERR(dp->adp))
++		return PTR_ERR(dp->adp);
++
+ 	return component_add(dev, &rockchip_dp_component_ops);
+ }
+ 
+ static int rockchip_dp_remove(struct platform_device *pdev)
+ {
++	struct rockchip_dp_device *dp = platform_get_drvdata(pdev);
++
+ 	component_del(&pdev->dev, &rockchip_dp_component_ops);
++	analogix_dp_remove(dp->adp);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
+index 15b31cddc054..2e4b4188659a 100644
+--- a/drivers/net/ethernet/broadcom/bcmsysport.c
++++ b/drivers/net/ethernet/broadcom/bcmsysport.c
+@@ -666,7 +666,8 @@ static struct sk_buff *bcm_sysport_rx_refill(struct bcm_sysport_priv *priv,
+ 	dma_addr_t mapping;
+ 
+ 	/* Allocate a new SKB for a new packet */
+-	skb = netdev_alloc_skb(priv->netdev, RX_BUF_LENGTH);
++	skb = __netdev_alloc_skb(priv->netdev, RX_BUF_LENGTH,
++				 GFP_ATOMIC | __GFP_NOWARN);
+ 	if (!skb) {
+ 		priv->mib.alloc_rx_buff_failed++;
+ 		netif_err(priv, rx_err, ndev, "SKB alloc failed\n");
+diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+index b7c0c20e1325..5fd1a9dfcfff 100644
+--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
++++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+@@ -1625,7 +1625,8 @@ static struct sk_buff *bcmgenet_rx_refill(struct bcmgenet_priv *priv,
+ 	dma_addr_t mapping;
+ 
+ 	/* Allocate a new Rx skb */
+-	skb = netdev_alloc_skb(priv->dev, priv->rx_buf_len + SKB_ALIGNMENT);
++	skb = __netdev_alloc_skb(priv->dev, priv->rx_buf_len + SKB_ALIGNMENT,
++				 GFP_ATOMIC | __GFP_NOWARN);
+ 	if (!skb) {
+ 		priv->mib.alloc_rx_buff_failed++;
+ 		netif_err(priv, rx_err, priv->dev,
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+index fa32cd5b418e..70d41783329d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+@@ -291,16 +291,19 @@ static int socfpga_gen5_set_phy_mode(struct socfpga_dwmac *dwmac)
+ 	    phymode == PHY_INTERFACE_MODE_MII ||
+ 	    phymode == PHY_INTERFACE_MODE_GMII ||
+ 	    phymode == PHY_INTERFACE_MODE_SGMII) {
+-		ctrl |= SYSMGR_EMACGRP_CTRL_PTP_REF_CLK_MASK << (reg_shift / 2);
+ 		regmap_read(sys_mgr_base_addr, SYSMGR_FPGAGRP_MODULE_REG,
+ 			    &module);
+ 		module |= (SYSMGR_FPGAGRP_MODULE_EMAC << (reg_shift / 2));
+ 		regmap_write(sys_mgr_base_addr, SYSMGR_FPGAGRP_MODULE_REG,
+ 			     module);
+-	} else {
+-		ctrl &= ~(SYSMGR_EMACGRP_CTRL_PTP_REF_CLK_MASK << (reg_shift / 2));
+ 	}
+ 
++	if (dwmac->f2h_ptp_ref_clk)
++		ctrl |= SYSMGR_EMACGRP_CTRL_PTP_REF_CLK_MASK << (reg_shift / 2);
++	else
++		ctrl &= ~(SYSMGR_EMACGRP_CTRL_PTP_REF_CLK_MASK <<
++			  (reg_shift / 2));
++
+ 	regmap_write(sys_mgr_base_addr, reg_offset, ctrl);
+ 
+ 	/* Deassert reset for the phy configuration to be sampled by
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+index 020159622559..e5d9007c8090 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_hwtstamp.c
+@@ -26,12 +26,16 @@ static void config_sub_second_increment(void __iomem *ioaddr,
+ 	unsigned long data;
+ 	u32 reg_value;
+ 
+-	/* For GMAC3.x, 4.x versions, convert the ptp_clock to nano second
+-	 *	formula = (1/ptp_clock) * 1000000000
+-	 * where ptp_clock is 50MHz if fine method is used to update system
++	/* For GMAC3.x, 4.x versions, in "fine adjustement mode" set sub-second
++	 * increment to twice the number of nanoseconds of a clock cycle.
++	 * The calculation of the default_addend value by the caller will set it
++	 * to mid-range = 2^31 when the remainder of this division is zero,
++	 * which will make the accumulator overflow once every 2 ptp_clock
++	 * cycles, adding twice the number of nanoseconds of a clock cycle :
++	 * 2000000000ULL / ptp_clock.
+ 	 */
+ 	if (value & PTP_TCR_TSCFUPDT)
+-		data = (1000000000ULL / 50000000);
++		data = (2000000000ULL / ptp_clock);
+ 	else
+ 		data = (1000000000ULL / ptp_clock);
+ 
+diff --git a/drivers/net/phy/bcm84881.c b/drivers/net/phy/bcm84881.c
+index 14d55a77eb28..126011582928 100644
+--- a/drivers/net/phy/bcm84881.c
++++ b/drivers/net/phy/bcm84881.c
+@@ -174,9 +174,6 @@ static int bcm84881_read_status(struct phy_device *phydev)
+ 	if (phydev->autoneg == AUTONEG_ENABLE && !phydev->autoneg_complete)
+ 		phydev->link = false;
+ 
+-	if (!phydev->link)
+-		return 0;
+-
+ 	linkmode_zero(phydev->lp_advertising);
+ 	phydev->speed = SPEED_UNKNOWN;
+ 	phydev->duplex = DUPLEX_UNKNOWN;
+@@ -184,6 +181,9 @@ static int bcm84881_read_status(struct phy_device *phydev)
+ 	phydev->asym_pause = 0;
+ 	phydev->mdix = 0;
+ 
++	if (!phydev->link)
++		return 0;
++
+ 	if (phydev->autoneg_complete) {
+ 		val = genphy_c45_read_lpa(phydev);
+ 		if (val < 0)
+diff --git a/drivers/net/wimax/i2400m/usb-fw.c b/drivers/net/wimax/i2400m/usb-fw.c
+index 529ebca1e9e1..1f7709d24f35 100644
+--- a/drivers/net/wimax/i2400m/usb-fw.c
++++ b/drivers/net/wimax/i2400m/usb-fw.c
+@@ -354,6 +354,7 @@ out:
+ 		usb_autopm_put_interface(i2400mu->usb_iface);
+ 	d_fnend(8, dev, "(i2400m %p ack %p size %zu) = %ld\n",
+ 		i2400m, ack, ack_size, (long) result);
++	usb_put_urb(&notif_urb);
+ 	return result;
+ 
+ error_exceeded:
+diff --git a/drivers/platform/x86/gpd-pocket-fan.c b/drivers/platform/x86/gpd-pocket-fan.c
+index b471b86c28fe..5b516e4c2bfb 100644
+--- a/drivers/platform/x86/gpd-pocket-fan.c
++++ b/drivers/platform/x86/gpd-pocket-fan.c
+@@ -128,7 +128,7 @@ static int gpd_pocket_fan_probe(struct platform_device *pdev)
+ 
+ 	for (i = 0; i < ARRAY_SIZE(temp_limits); i++) {
+ 		if (temp_limits[i] < 20000 || temp_limits[i] > 90000) {
+-			dev_err(&pdev->dev, "Invalid temp-limit %d (must be between 40000 and 70000)\n",
++			dev_err(&pdev->dev, "Invalid temp-limit %d (must be between 20000 and 90000)\n",
+ 				temp_limits[i]);
+ 			temp_limits[0] = TEMP_LIMIT0_DEFAULT;
+ 			temp_limits[1] = TEMP_LIMIT1_DEFAULT;
+diff --git a/drivers/remoteproc/qcom_q6v5_mss.c b/drivers/remoteproc/qcom_q6v5_mss.c
+index 0b1d737b0e97..8844fc56c5f6 100644
+--- a/drivers/remoteproc/qcom_q6v5_mss.c
++++ b/drivers/remoteproc/qcom_q6v5_mss.c
+@@ -1607,7 +1607,7 @@ static int q6v5_probe(struct platform_device *pdev)
+ 	ret = of_property_read_string_index(pdev->dev.of_node, "firmware-name",
+ 					    1, &qproc->hexagon_mdt_image);
+ 	if (ret < 0 && ret != -EINVAL)
+-		return ret;
++		goto free_rproc;
+ 
+ 	platform_set_drvdata(pdev, qproc);
+ 
+diff --git a/drivers/scsi/sg.c b/drivers/scsi/sg.c
+index 9c0ee192f0f9..20472aaaf630 100644
+--- a/drivers/scsi/sg.c
++++ b/drivers/scsi/sg.c
+@@ -685,8 +685,10 @@ sg_write(struct file *filp, const char __user *buf, size_t count, loff_t * ppos)
+ 	hp->flags = input_size;	/* structure abuse ... */
+ 	hp->pack_id = old_hdr.pack_id;
+ 	hp->usr_ptr = NULL;
+-	if (copy_from_user(cmnd, buf, cmd_size))
++	if (copy_from_user(cmnd, buf, cmd_size)) {
++		sg_remove_request(sfp, srp);
+ 		return -EFAULT;
++	}
+ 	/*
+ 	 * SG_DXFER_TO_FROM_DEV is functionally equivalent to SG_DXFER_FROM_DEV,
+ 	 * but is is possible that the app intended SG_DXFER_TO_DEV, because there
+diff --git a/drivers/usb/dwc3/core.h b/drivers/usb/dwc3/core.h
+index 3ecc69c5b150..ce4acbf7fef9 100644
+--- a/drivers/usb/dwc3/core.h
++++ b/drivers/usb/dwc3/core.h
+@@ -310,6 +310,10 @@
+ #define DWC3_GTXFIFOSIZ_TXFDEF(n)	((n) & 0xffff)
+ #define DWC3_GTXFIFOSIZ_TXFSTADDR(n)	((n) & 0xffff0000)
+ 
++/* Global RX Fifo Size Register */
++#define DWC31_GRXFIFOSIZ_RXFDEP(n)	((n) & 0x7fff)	/* DWC_usb31 only */
++#define DWC3_GRXFIFOSIZ_RXFDEP(n)	((n) & 0xffff)
++
+ /* Global Event Size Registers */
+ #define DWC3_GEVNTSIZ_INTMASK		BIT(31)
+ #define DWC3_GEVNTSIZ_SIZE(n)		((n) & 0xffff)
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index c4be4631937a..bc1cf6d0412a 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2223,7 +2223,6 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
+ {
+ 	struct dwc3 *dwc = dep->dwc;
+ 	int mdwidth;
+-	int kbytes;
+ 	int size;
+ 
+ 	mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
+@@ -2239,17 +2238,17 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
+ 	/* FIFO Depth is in MDWDITH bytes. Multiply */
+ 	size *= mdwidth;
+ 
+-	kbytes = size / 1024;
+-	if (kbytes == 0)
+-		kbytes = 1;
+-
+ 	/*
+-	 * FIFO sizes account an extra MDWIDTH * (kbytes + 1) bytes for
+-	 * internal overhead. We don't really know how these are used,
+-	 * but documentation say it exists.
++	 * To meet performance requirement, a minimum TxFIFO size of 3x
++	 * MaxPacketSize is recommended for endpoints that support burst and a
++	 * minimum TxFIFO size of 2x MaxPacketSize for endpoints that don't
++	 * support burst. Use those numbers and we can calculate the max packet
++	 * limit as below.
+ 	 */
+-	size -= mdwidth * (kbytes + 1);
+-	size /= kbytes;
++	if (dwc->maximum_speed >= USB_SPEED_SUPER)
++		size /= 3;
++	else
++		size /= 2;
+ 
+ 	usb_ep_set_maxpacket_limit(&dep->endpoint, size);
+ 
+@@ -2267,8 +2266,39 @@ static int dwc3_gadget_init_in_endpoint(struct dwc3_ep *dep)
+ static int dwc3_gadget_init_out_endpoint(struct dwc3_ep *dep)
+ {
+ 	struct dwc3 *dwc = dep->dwc;
++	int mdwidth;
++	int size;
++
++	mdwidth = DWC3_MDWIDTH(dwc->hwparams.hwparams0);
++
++	/* MDWIDTH is represented in bits, convert to bytes */
++	mdwidth /= 8;
+ 
+-	usb_ep_set_maxpacket_limit(&dep->endpoint, 1024);
++	/* All OUT endpoints share a single RxFIFO space */
++	size = dwc3_readl(dwc->regs, DWC3_GRXFIFOSIZ(0));
++	if (dwc3_is_usb31(dwc))
++		size = DWC31_GRXFIFOSIZ_RXFDEP(size);
++	else
++		size = DWC3_GRXFIFOSIZ_RXFDEP(size);
++
++	/* FIFO depth is in MDWDITH bytes */
++	size *= mdwidth;
++
++	/*
++	 * To meet performance requirement, a minimum recommended RxFIFO size
++	 * is defined as follow:
++	 * RxFIFO size >= (3 x MaxPacketSize) +
++	 * (3 x 8 bytes setup packets size) + (16 bytes clock crossing margin)
++	 *
++	 * Then calculate the max packet limit as below.
++	 */
++	size -= (3 * 8) + 16;
++	if (size < 0)
++		size = 0;
++	else
++		size /= 3;
++
++	usb_ep_set_maxpacket_limit(&dep->endpoint, size);
+ 	dep->endpoint.max_streams = 15;
+ 	dep->endpoint.ops = &dwc3_gadget_ep_ops;
+ 	list_add_tail(&dep->endpoint.ep_list,
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index c2d7d57e98cf..bb3f63386b47 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -543,6 +543,11 @@ static int vhost_vsock_start(struct vhost_vsock *vsock)
+ 		mutex_unlock(&vq->mutex);
+ 	}
+ 
++	/* Some packets may have been queued before the device was started,
++	 * let's kick the send worker to send them.
++	 */
++	vhost_work_queue(&vsock->dev, &vsock->send_pkt_work);
++
+ 	mutex_unlock(&vsock->dev.mutex);
+ 	return 0;
+ 
+diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
+index 4804d1df8c1c..9c614d6916c2 100644
+--- a/fs/cifs/connect.c
++++ b/fs/cifs/connect.c
+@@ -375,8 +375,10 @@ static int reconn_set_ipaddr(struct TCP_Server_Info *server)
+ 		return rc;
+ 	}
+ 
++	spin_lock(&cifs_tcp_ses_lock);
+ 	rc = cifs_convert_address((struct sockaddr *)&server->dstaddr, ipaddr,
+ 				  strlen(ipaddr));
++	spin_unlock(&cifs_tcp_ses_lock);
+ 	kfree(ipaddr);
+ 
+ 	return !rc ? -1 : 0;
+@@ -3417,6 +3419,10 @@ cifs_find_tcon(struct cifs_ses *ses, struct smb_vol *volume_info)
+ 	spin_lock(&cifs_tcp_ses_lock);
+ 	list_for_each(tmp, &ses->tcon_list) {
+ 		tcon = list_entry(tmp, struct cifs_tcon, tcon_list);
++#ifdef CONFIG_CIFS_DFS_UPCALL
++		if (tcon->dfs_path)
++			continue;
++#endif
+ 		if (!match_tcon(tcon, volume_info))
+ 			continue;
+ 		++tcon->tc_count;
+diff --git a/include/drm/bridge/analogix_dp.h b/include/drm/bridge/analogix_dp.h
+index 7aa2f93da49c..b0dcc07334a1 100644
+--- a/include/drm/bridge/analogix_dp.h
++++ b/include/drm/bridge/analogix_dp.h
+@@ -42,9 +42,10 @@ int analogix_dp_resume(struct analogix_dp_device *dp);
+ int analogix_dp_suspend(struct analogix_dp_device *dp);
+ 
+ struct analogix_dp_device *
+-analogix_dp_bind(struct device *dev, struct drm_device *drm_dev,
+-		 struct analogix_dp_plat_data *plat_data);
++analogix_dp_probe(struct device *dev, struct analogix_dp_plat_data *plat_data);
++int analogix_dp_bind(struct analogix_dp_device *dp, struct drm_device *drm_dev);
+ void analogix_dp_unbind(struct analogix_dp_device *dp);
++void analogix_dp_remove(struct analogix_dp_device *dp);
+ 
+ int analogix_dp_start_crc(struct drm_connector *connector);
+ int analogix_dp_stop_crc(struct drm_connector *connector);
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 73c66a3a33ae..7f3486e32e5d 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -619,6 +619,15 @@ static inline bool ieee80211_is_qos_nullfunc(__le16 fc)
+ 	       cpu_to_le16(IEEE80211_FTYPE_DATA | IEEE80211_STYPE_QOS_NULLFUNC);
+ }
+ 
++/**
++ * ieee80211_is_any_nullfunc - check if frame is regular or QoS nullfunc frame
++ * @fc: frame control bytes in little-endian byteorder
++ */
++static inline bool ieee80211_is_any_nullfunc(__le16 fc)
++{
++	return (ieee80211_is_nullfunc(fc) || ieee80211_is_qos_nullfunc(fc));
++}
++
+ /**
+  * ieee80211_is_bufferable_mmpdu - check if frame is bufferable MMPDU
+  * @fc: frame control field in little-endian byteorder
+diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
+index fd81c7de77a7..63089c70adbb 100644
+--- a/kernel/trace/ftrace.c
++++ b/kernel/trace/ftrace.c
+@@ -5155,6 +5155,7 @@ int unregister_ftrace_direct(unsigned long ip, unsigned long addr)
+ 			list_del_rcu(&direct->next);
+ 			synchronize_rcu_tasks();
+ 			kfree(direct);
++			kfree(entry);
+ 			ftrace_direct_func_count--;
+ 		}
+ 	}
+diff --git a/kernel/trace/trace_events_hist.c b/kernel/trace/trace_events_hist.c
+index 5f6834a2bf41..fcab11cc6833 100644
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -3320,6 +3320,9 @@ static void __destroy_hist_field(struct hist_field *hist_field)
+ 	kfree(hist_field->name);
+ 	kfree(hist_field->type);
+ 
++	kfree(hist_field->system);
++	kfree(hist_field->event_name);
++
+ 	kfree(hist_field);
+ }
+ 
+@@ -4382,6 +4385,7 @@ static struct hist_field *create_var(struct hist_trigger_data *hist_data,
+ 		goto out;
+ 	}
+ 
++	var->ref = 1;
+ 	var->flags = HIST_FIELD_FL_VAR;
+ 	var->var.idx = idx;
+ 	var->var.hist_data = var->hist_data = hist_data;
+@@ -5011,6 +5015,9 @@ static void destroy_field_vars(struct hist_trigger_data *hist_data)
+ 
+ 	for (i = 0; i < hist_data->n_field_vars; i++)
+ 		destroy_field_var(hist_data->field_vars[i]);
++
++	for (i = 0; i < hist_data->n_save_vars; i++)
++		destroy_field_var(hist_data->save_vars[i]);
+ }
+ 
+ static void save_field_var(struct hist_trigger_data *hist_data,
+diff --git a/lib/mpi/longlong.h b/lib/mpi/longlong.h
+index 2dceaca27489..891e1c3549c4 100644
+--- a/lib/mpi/longlong.h
++++ b/lib/mpi/longlong.h
+@@ -722,22 +722,22 @@ do {									\
+ do { \
+ 	if (__builtin_constant_p(bh) && (bh) == 0) \
+ 		__asm__ ("{a%I4|add%I4c} %1,%3,%4\n\t{aze|addze} %0,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "%r" ((USItype)(ah)), \
+ 		"%r" ((USItype)(al)), \
+ 		"rI" ((USItype)(bl))); \
+ 	else if (__builtin_constant_p(bh) && (bh) == ~(USItype) 0) \
+ 		__asm__ ("{a%I4|add%I4c} %1,%3,%4\n\t{ame|addme} %0,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "%r" ((USItype)(ah)), \
+ 		"%r" ((USItype)(al)), \
+ 		"rI" ((USItype)(bl))); \
+ 	else \
+ 		__asm__ ("{a%I5|add%I5c} %1,%4,%5\n\t{ae|adde} %0,%2,%3" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "%r" ((USItype)(ah)), \
+ 		"r" ((USItype)(bh)), \
+ 		"%r" ((USItype)(al)), \
+@@ -747,36 +747,36 @@ do { \
+ do { \
+ 	if (__builtin_constant_p(ah) && (ah) == 0) \
+ 		__asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{sfze|subfze} %0,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "r" ((USItype)(bh)), \
+ 		"rI" ((USItype)(al)), \
+ 		"r" ((USItype)(bl))); \
+ 	else if (__builtin_constant_p(ah) && (ah) == ~(USItype) 0) \
+ 		__asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{sfme|subfme} %0,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "r" ((USItype)(bh)), \
+ 		"rI" ((USItype)(al)), \
+ 		"r" ((USItype)(bl))); \
+ 	else if (__builtin_constant_p(bh) && (bh) == 0) \
+ 		__asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{ame|addme} %0,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "r" ((USItype)(ah)), \
+ 		"rI" ((USItype)(al)), \
+ 		"r" ((USItype)(bl))); \
+ 	else if (__builtin_constant_p(bh) && (bh) == ~(USItype) 0) \
+ 		__asm__ ("{sf%I3|subf%I3c} %1,%4,%3\n\t{aze|addze} %0,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "r" ((USItype)(ah)), \
+ 		"rI" ((USItype)(al)), \
+ 		"r" ((USItype)(bl))); \
+ 	else \
+ 		__asm__ ("{sf%I4|subf%I4c} %1,%5,%4\n\t{sfe|subfe} %0,%3,%2" \
+-		: "=r" ((USItype)(sh)), \
+-		"=&r" ((USItype)(sl)) \
++		: "=r" (sh), \
++		"=&r" (sl) \
+ 		: "r" ((USItype)(ah)), \
+ 		"r" ((USItype)(bh)), \
+ 		"rI" ((USItype)(al)), \
+@@ -787,7 +787,7 @@ do { \
+ do { \
+ 	USItype __m0 = (m0), __m1 = (m1); \
+ 	__asm__ ("mulhwu %0,%1,%2" \
+-	: "=r" ((USItype) ph) \
++	: "=r" (ph) \
+ 	: "%r" (__m0), \
+ 	"r" (__m1)); \
+ 	(pl) = __m0 * __m1; \
+diff --git a/mm/mremap.c b/mm/mremap.c
+index af363063ea23..d28f08a36b96 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -606,6 +606,16 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
+ 	LIST_HEAD(uf_unmap_early);
+ 	LIST_HEAD(uf_unmap);
+ 
++	/*
++	 * There is a deliberate asymmetry here: we strip the pointer tag
++	 * from the old address but leave the new address alone. This is
++	 * for consistency with mmap(), where we prevent the creation of
++	 * aliasing mappings in userspace by leaving the tag bits of the
++	 * mapping address intact. A non-zero tag will cause the subsequent
++	 * range checks to reject the address as invalid.
++	 *
++	 * See Documentation/arm64/tagged-address-abi.rst for more information.
++	 */
+ 	addr = untagged_addr(addr);
+ 
+ 	if (flags & ~(MREMAP_FIXED | MREMAP_MAYMOVE))
+diff --git a/net/core/netclassid_cgroup.c b/net/core/netclassid_cgroup.c
+index b4c87fe31be2..41b24cd31562 100644
+--- a/net/core/netclassid_cgroup.c
++++ b/net/core/netclassid_cgroup.c
+@@ -127,10 +127,8 @@ static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft,
+ 	cs->classid = (u32)value;
+ 
+ 	css_task_iter_start(css, 0, &it);
+-	while ((p = css_task_iter_next(&it))) {
++	while ((p = css_task_iter_next(&it)))
+ 		update_classid_task(p, cs->classid);
+-		cond_resched();
+-	}
+ 	css_task_iter_end(&it);
+ 
+ 	return 0;
+diff --git a/net/mac80211/mlme.c b/net/mac80211/mlme.c
+index 88d7a692a965..c21fbc6cc991 100644
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -2460,7 +2460,7 @@ void ieee80211_sta_tx_notify(struct ieee80211_sub_if_data *sdata,
+ 	if (!ieee80211_is_data(hdr->frame_control))
+ 	    return;
+ 
+-	if (ieee80211_is_nullfunc(hdr->frame_control) &&
++	if (ieee80211_is_any_nullfunc(hdr->frame_control) &&
+ 	    sdata->u.mgd.probe_send_count > 0) {
+ 		if (ack)
+ 			ieee80211_sta_reset_conn_monitor(sdata);
+diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c
+index 0ba98ad9bc85..69429c8df7b3 100644
+--- a/net/mac80211/rx.c
++++ b/net/mac80211/rx.c
+@@ -1450,8 +1450,7 @@ ieee80211_rx_h_check_dup(struct ieee80211_rx_data *rx)
+ 		return RX_CONTINUE;
+ 
+ 	if (ieee80211_is_ctl(hdr->frame_control) ||
+-	    ieee80211_is_nullfunc(hdr->frame_control) ||
+-	    ieee80211_is_qos_nullfunc(hdr->frame_control) ||
++	    ieee80211_is_any_nullfunc(hdr->frame_control) ||
+ 	    is_multicast_ether_addr(hdr->addr1))
+ 		return RX_CONTINUE;
+ 
+@@ -1838,8 +1837,7 @@ ieee80211_rx_h_sta_process(struct ieee80211_rx_data *rx)
+ 	 * Drop (qos-)data::nullfunc frames silently, since they
+ 	 * are used only to control station power saving mode.
+ 	 */
+-	if (ieee80211_is_nullfunc(hdr->frame_control) ||
+-	    ieee80211_is_qos_nullfunc(hdr->frame_control)) {
++	if (ieee80211_is_any_nullfunc(hdr->frame_control)) {
+ 		I802_DEBUG_INC(rx->local->rx_handlers_drop_nullfunc);
+ 
+ 		/*
+@@ -2319,7 +2317,7 @@ static int ieee80211_drop_unencrypted(struct ieee80211_rx_data *rx, __le16 fc)
+ 
+ 	/* Drop unencrypted frames if key is set. */
+ 	if (unlikely(!ieee80211_has_protected(fc) &&
+-		     !ieee80211_is_nullfunc(fc) &&
++		     !ieee80211_is_any_nullfunc(fc) &&
+ 		     ieee80211_is_data(fc) && rx->key))
+ 		return -EACCES;
+ 
+diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
+index e3572be307d6..149ed0510778 100644
+--- a/net/mac80211/sta_info.c
++++ b/net/mac80211/sta_info.c
+@@ -231,7 +231,8 @@ struct sta_info *sta_info_get_by_idx(struct ieee80211_sub_if_data *sdata,
+ 	struct sta_info *sta;
+ 	int i = 0;
+ 
+-	list_for_each_entry_rcu(sta, &local->sta_list, list) {
++	list_for_each_entry_rcu(sta, &local->sta_list, list,
++				lockdep_is_held(&local->sta_mtx)) {
+ 		if (sdata != sta->sdata)
+ 			continue;
+ 		if (i < idx) {
+diff --git a/net/mac80211/status.c b/net/mac80211/status.c
+index b720feaf9a74..2c2d78bcd78a 100644
+--- a/net/mac80211/status.c
++++ b/net/mac80211/status.c
+@@ -643,8 +643,7 @@ static void ieee80211_report_ack_skb(struct ieee80211_local *local,
+ 		rcu_read_lock();
+ 		sdata = ieee80211_sdata_from_skb(local, skb);
+ 		if (sdata) {
+-			if (ieee80211_is_nullfunc(hdr->frame_control) ||
+-			    ieee80211_is_qos_nullfunc(hdr->frame_control))
++			if (ieee80211_is_any_nullfunc(hdr->frame_control))
+ 				cfg80211_probe_status(sdata->dev, hdr->addr1,
+ 						      cookie, acked,
+ 						      info->status.ack_signal,
+@@ -1056,7 +1055,7 @@ static void __ieee80211_tx_status(struct ieee80211_hw *hw,
+ 			I802_DEBUG_INC(local->dot11FailedCount);
+ 	}
+ 
+-	if ((ieee80211_is_nullfunc(fc) || ieee80211_is_qos_nullfunc(fc)) &&
++	if (ieee80211_is_any_nullfunc(fc) &&
+ 	    ieee80211_has_pm(fc) &&
+ 	    ieee80211_hw_check(&local->hw, REPORTS_TX_ACK_STATUS) &&
+ 	    !(info->flags & IEEE80211_TX_CTL_INJECTED) &&
+diff --git a/net/mac80211/tx.c b/net/mac80211/tx.c
+index efe4c1fc68e5..a7b92d1feee1 100644
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -297,7 +297,7 @@ ieee80211_tx_h_check_assoc(struct ieee80211_tx_data *tx)
+ 	if (unlikely(test_bit(SCAN_SW_SCANNING, &tx->local->scanning)) &&
+ 	    test_bit(SDATA_STATE_OFFCHANNEL, &tx->sdata->state) &&
+ 	    !ieee80211_is_probe_req(hdr->frame_control) &&
+-	    !ieee80211_is_nullfunc(hdr->frame_control))
++	    !ieee80211_is_any_nullfunc(hdr->frame_control))
+ 		/*
+ 		 * When software scanning only nullfunc frames (to notify
+ 		 * the sleep state to the AP) and probe requests (for the
+diff --git a/net/sctp/sm_make_chunk.c b/net/sctp/sm_make_chunk.c
+index 09050c1d5517..f7cb0b7faec2 100644
+--- a/net/sctp/sm_make_chunk.c
++++ b/net/sctp/sm_make_chunk.c
+@@ -858,7 +858,11 @@ struct sctp_chunk *sctp_make_shutdown(const struct sctp_association *asoc,
+ 	struct sctp_chunk *retval;
+ 	__u32 ctsn;
+ 
+-	ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map);
++	if (chunk && chunk->asoc)
++		ctsn = sctp_tsnmap_get_ctsn(&chunk->asoc->peer.tsn_map);
++	else
++		ctsn = sctp_tsnmap_get_ctsn(&asoc->peer.tsn_map);
++
+ 	shut.cum_tsn_ack = htonl(ctsn);
+ 
+ 	retval = sctp_make_control(asoc, SCTP_CID_SHUTDOWN, 0,
+diff --git a/net/sunrpc/cache.c b/net/sunrpc/cache.c
+index bd843a81afa0..d36cea4e270d 100644
+--- a/net/sunrpc/cache.c
++++ b/net/sunrpc/cache.c
+@@ -521,7 +521,6 @@ void cache_purge(struct cache_detail *detail)
+ {
+ 	struct cache_head *ch = NULL;
+ 	struct hlist_head *head = NULL;
+-	struct hlist_node *tmp = NULL;
+ 	int i = 0;
+ 
+ 	spin_lock(&detail->hash_lock);
+@@ -533,7 +532,9 @@ void cache_purge(struct cache_detail *detail)
+ 	dprintk("RPC: %d entries in %s cache\n", detail->entries, detail->name);
+ 	for (i = 0; i < detail->hash_size; i++) {
+ 		head = &detail->hash_table[i];
+-		hlist_for_each_entry_safe(ch, tmp, head, cache_list) {
++		while (!hlist_empty(head)) {
++			ch = hlist_entry(head->first, struct cache_head,
++					 cache_list);
+ 			sunrpc_begin_cache_remove_entry(ch, detail);
+ 			spin_unlock(&detail->hash_lock);
+ 			sunrpc_end_cache_remove_entry(ch, detail);
+diff --git a/scripts/config b/scripts/config
+index e0e39826dae9..eee5b7f3a092 100755
+--- a/scripts/config
++++ b/scripts/config
+@@ -7,6 +7,9 @@ myname=${0##*/}
+ # If no prefix forced, use the default CONFIG_
+ CONFIG_="${CONFIG_-CONFIG_}"
+ 
++# We use an uncommon delimiter for sed substitutions
++SED_DELIM=$(echo -en "\001")
++
+ usage() {
+ 	cat >&2 <<EOL
+ Manipulate options in a .config file from the command line.
+@@ -83,7 +86,7 @@ txt_subst() {
+ 	local infile="$3"
+ 	local tmpfile="$infile.swp"
+ 
+-	sed -e "s:$before:$after:" "$infile" >"$tmpfile"
++	sed -e "s$SED_DELIM$before$SED_DELIM$after$SED_DELIM" "$infile" >"$tmpfile"
+ 	# replace original file with the edited one
+ 	mv "$tmpfile" "$infile"
+ }
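A quick illustration of why the scripts/config hunk above moves to an uncommon delimiter: with the default `:` delimiter, any replacement text containing a colon aborts the `s` command, while the `\001` byte is vanishingly unlikely to appear in a `.config` value. This is a standalone sketch, not part of the patch; the `CONFIG_CMDLINE` value is made up for the demonstration.

```shell
# Reproduce txt_subst's delimiter choice: \001 instead of ':'.
SED_DELIM=$(printf '\001')

before='CONFIG_CMDLINE=""'
# Replacement text that contains ':' -- this is what broke the old
# `sed -e "s:$before:$after:"` form with "unknown option to `s'".
after='CONFIG_CMDLINE="root=PARTUUID=1234:rootfstype=ext4"'

# The colon-delimited form rejects the expression outright.
if ! echo "$before" | sed -e "s:${before}:${after}:" >/dev/null 2>&1; then
	echo "plain ':' delimiter: sed rejects the expression"
fi

# The \001-delimited form performs the substitution as intended.
result=$(echo "$before" | sed -e "s${SED_DELIM}${before}${SED_DELIM}${after}${SED_DELIM}")
echo "$result"
```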
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 59b60b1f26f8..8b015b27e9c7 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2075,9 +2075,10 @@ static void pcm_mmap_prepare(struct snd_pcm_substream *substream,
+  * some HD-audio PCI entries are exposed without any codecs, and such devices
+  * should be ignored from the beginning.
+  */
+-static const struct snd_pci_quirk driver_blacklist[] = {
+-	SND_PCI_QUIRK(0x1462, 0xcb59, "MSI TRX40 Creator", 0),
+-	SND_PCI_QUIRK(0x1462, 0xcb60, "MSI TRX40", 0),
++static const struct pci_device_id driver_blacklist[] = {
++	{ PCI_DEVICE_SUB(0x1022, 0x1487, 0x1043, 0x874f) }, /* ASUS ROG Zenith II / Strix */
++	{ PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb59) }, /* MSI TRX40 Creator */
++	{ PCI_DEVICE_SUB(0x1022, 0x1487, 0x1462, 0xcb60) }, /* MSI TRX40 */
+ 	{}
+ };
+ 
+@@ -2097,7 +2098,7 @@ static int azx_probe(struct pci_dev *pci,
+ 	bool schedule_probe;
+ 	int err;
+ 
+-	if (snd_pci_quirk_lookup(pci, driver_blacklist)) {
++	if (pci_match_id(driver_blacklist, pci)) {
+ 		dev_info(&pci->dev, "Skipping the blacklisted device\n");
+ 		return -ENODEV;
+ 	}
+diff --git a/sound/soc/codecs/hdac_hdmi.c b/sound/soc/codecs/hdac_hdmi.c
+index e6558475e006..f0f689ddbefe 100644
+--- a/sound/soc/codecs/hdac_hdmi.c
++++ b/sound/soc/codecs/hdac_hdmi.c
+@@ -142,14 +142,14 @@ static struct hdac_hdmi_pcm *
+ hdac_hdmi_get_pcm_from_cvt(struct hdac_hdmi_priv *hdmi,
+ 			   struct hdac_hdmi_cvt *cvt)
+ {
+-	struct hdac_hdmi_pcm *pcm = NULL;
++	struct hdac_hdmi_pcm *pcm;
+ 
+ 	list_for_each_entry(pcm, &hdmi->pcm_list, head) {
+ 		if (pcm->cvt == cvt)
+-			break;
++			return pcm;
+ 	}
+ 
+-	return pcm;
++	return NULL;
+ }
+ 
+ static void hdac_hdmi_jack_report(struct hdac_hdmi_pcm *pcm,
+diff --git a/sound/soc/codecs/sgtl5000.c b/sound/soc/codecs/sgtl5000.c
+index d5130193b4a2..e8a8bf7b4ffe 100644
+--- a/sound/soc/codecs/sgtl5000.c
++++ b/sound/soc/codecs/sgtl5000.c
+@@ -1653,6 +1653,40 @@ static int sgtl5000_i2c_probe(struct i2c_client *client,
+ 		dev_err(&client->dev,
+ 			"Error %d initializing CHIP_CLK_CTRL\n", ret);
+ 
++	/* Mute everything to avoid pop from the following power-up */
++	ret = regmap_write(sgtl5000->regmap, SGTL5000_CHIP_ANA_CTRL,
++			   SGTL5000_CHIP_ANA_CTRL_DEFAULT);
++	if (ret) {
++		dev_err(&client->dev,
++			"Error %d muting outputs via CHIP_ANA_CTRL\n", ret);
++		goto disable_clk;
++	}
++
++	/*
++	 * If VAG is powered-on (e.g. from previous boot), it would be disabled
++	 * by the write to ANA_POWER in later steps of the probe code. This
++	 * may create a loud pop even with all outputs muted. The proper way
++	 * to circumvent this is disabling the bit first and waiting the proper
++	 * cool-down time.
++	 */
++	ret = regmap_read(sgtl5000->regmap, SGTL5000_CHIP_ANA_POWER, &value);
++	if (ret) {
++		dev_err(&client->dev, "Failed to read ANA_POWER: %d\n", ret);
++		goto disable_clk;
++	}
++	if (value & SGTL5000_VAG_POWERUP) {
++		ret = regmap_update_bits(sgtl5000->regmap,
++					 SGTL5000_CHIP_ANA_POWER,
++					 SGTL5000_VAG_POWERUP,
++					 0);
++		if (ret) {
++			dev_err(&client->dev, "Error %d disabling VAG\n", ret);
++			goto disable_clk;
++		}
++
++		msleep(SGTL5000_VAG_POWERDOWN_DELAY);
++	}
++
+ 	/* Follow section 2.2.1.1 of AN3663 */
+ 	ana_pwr = SGTL5000_ANA_POWER_DEFAULT;
+ 	if (sgtl5000->num_supplies <= VDDD) {
+diff --git a/sound/soc/codecs/sgtl5000.h b/sound/soc/codecs/sgtl5000.h
+index a4bf4bca95bf..56ec5863f250 100644
+--- a/sound/soc/codecs/sgtl5000.h
++++ b/sound/soc/codecs/sgtl5000.h
+@@ -233,6 +233,7 @@
+ /*
+  * SGTL5000_CHIP_ANA_CTRL
+  */
++#define SGTL5000_CHIP_ANA_CTRL_DEFAULT		0x0133
+ #define SGTL5000_LINE_OUT_MUTE			0x0100
+ #define SGTL5000_HP_SEL_MASK			0x0040
+ #define SGTL5000_HP_SEL_SHIFT			6
+diff --git a/sound/soc/sh/rcar/ssi.c b/sound/soc/sh/rcar/ssi.c
+index fc5d089868df..4a7d3413917f 100644
+--- a/sound/soc/sh/rcar/ssi.c
++++ b/sound/soc/sh/rcar/ssi.c
+@@ -594,10 +594,16 @@ static int rsnd_ssi_stop(struct rsnd_mod *mod,
+ 	 * Capture:  It might not receave data. Do nothing
+ 	 */
+ 	if (rsnd_io_is_play(io)) {
+-		rsnd_mod_write(mod, SSICR, cr | EN);
++		rsnd_mod_write(mod, SSICR, cr | ssi->cr_en);
+ 		rsnd_ssi_status_check(mod, DIRQ);
+ 	}
+ 
++	/* In multi-SSI mode, stop is performed by setting ssi0129 in
++	 * SSI_CONTROL to 0 (in rsnd_ssio_stop_gen2). Do nothing here.
++	 */
++	if (rsnd_ssi_multi_slaves_runtime(io))
++		return 0;
++
+ 	/*
+ 	 * disable SSI,
+ 	 * and, wait idle state
+@@ -737,6 +743,9 @@ static void rsnd_ssi_parent_attach(struct rsnd_mod *mod,
+ 	if (!rsnd_rdai_is_clk_master(rdai))
+ 		return;
+ 
++	if (rsnd_ssi_is_multi_slave(mod, io))
++		return;
++
+ 	switch (rsnd_mod_id(mod)) {
+ 	case 1:
+ 	case 2:
+diff --git a/sound/soc/sh/rcar/ssiu.c b/sound/soc/sh/rcar/ssiu.c
+index f35d88211887..9c7c3e7539c9 100644
+--- a/sound/soc/sh/rcar/ssiu.c
++++ b/sound/soc/sh/rcar/ssiu.c
+@@ -221,7 +221,7 @@ static int rsnd_ssiu_init_gen2(struct rsnd_mod *mod,
+ 			i;
+ 
+ 		for_each_rsnd_mod_array(i, pos, io, rsnd_ssi_array) {
+-			shift	= (i * 4) + 16;
++			shift	= (i * 4) + 20;
+ 			val	= (val & ~(0xF << shift)) |
+ 				rsnd_mod_id(pos) << shift;
+ 		}
+diff --git a/sound/soc/soc-topology.c b/sound/soc/soc-topology.c
+index a152409e8746..009d65a6fb43 100644
+--- a/sound/soc/soc-topology.c
++++ b/sound/soc/soc-topology.c
+@@ -894,7 +894,13 @@ static int soc_tplg_dmixer_create(struct soc_tplg *tplg, unsigned int count,
+ 		}
+ 
+ 		/* create any TLV data */
+-		soc_tplg_create_tlv(tplg, &kc, &mc->hdr);
++		err = soc_tplg_create_tlv(tplg, &kc, &mc->hdr);
++		if (err < 0) {
++			dev_err(tplg->dev, "ASoC: failed to create TLV %s\n",
++				mc->hdr.name);
++			kfree(sm);
++			continue;
++		}
+ 
+ 		/* pass control to driver for optional further init */
+ 		err = soc_tplg_init_kcontrol(tplg, &kc,
+@@ -1118,6 +1124,7 @@ static int soc_tplg_kcontrol_elems_load(struct soc_tplg *tplg,
+ 	struct snd_soc_tplg_hdr *hdr)
+ {
+ 	struct snd_soc_tplg_ctl_hdr *control_hdr;
++	int ret;
+ 	int i;
+ 
+ 	if (tplg->pass != SOC_TPLG_PASS_MIXER) {
+@@ -1146,25 +1153,30 @@ static int soc_tplg_kcontrol_elems_load(struct soc_tplg *tplg,
+ 		case SND_SOC_TPLG_CTL_RANGE:
+ 		case SND_SOC_TPLG_DAPM_CTL_VOLSW:
+ 		case SND_SOC_TPLG_DAPM_CTL_PIN:
+-			soc_tplg_dmixer_create(tplg, 1,
+-					       le32_to_cpu(hdr->payload_size));
++			ret = soc_tplg_dmixer_create(tplg, 1,
++					le32_to_cpu(hdr->payload_size));
+ 			break;
+ 		case SND_SOC_TPLG_CTL_ENUM:
+ 		case SND_SOC_TPLG_CTL_ENUM_VALUE:
+ 		case SND_SOC_TPLG_DAPM_CTL_ENUM_DOUBLE:
+ 		case SND_SOC_TPLG_DAPM_CTL_ENUM_VIRT:
+ 		case SND_SOC_TPLG_DAPM_CTL_ENUM_VALUE:
+-			soc_tplg_denum_create(tplg, 1,
+-					      le32_to_cpu(hdr->payload_size));
++			ret = soc_tplg_denum_create(tplg, 1,
++					le32_to_cpu(hdr->payload_size));
+ 			break;
+ 		case SND_SOC_TPLG_CTL_BYTES:
+-			soc_tplg_dbytes_create(tplg, 1,
+-					       le32_to_cpu(hdr->payload_size));
++			ret = soc_tplg_dbytes_create(tplg, 1,
++					le32_to_cpu(hdr->payload_size));
+ 			break;
+ 		default:
+ 			soc_bind_err(tplg, control_hdr, i);
+ 			return -EINVAL;
+ 		}
++		if (ret < 0) {
++			dev_err(tplg->dev, "ASoC: invalid control\n");
++			return ret;
++		}
++
+ 	}
+ 
+ 	return 0;
+@@ -1272,7 +1284,9 @@ static int soc_tplg_dapm_graph_elems_load(struct soc_tplg *tplg,
+ 		routes[i]->dobj.index = tplg->index;
+ 		list_add(&routes[i]->dobj.list, &tplg->comp->dobj_list);
+ 
+-		soc_tplg_add_route(tplg, routes[i]);
++		ret = soc_tplg_add_route(tplg, routes[i]);
++		if (ret < 0)
++			break;
+ 
+ 		/* add route, but keep going if some fail */
+ 		snd_soc_dapm_add_routes(dapm, routes[i], 1);
+@@ -1355,7 +1369,13 @@ static struct snd_kcontrol_new *soc_tplg_dapm_widget_dmixer_create(
+ 		}
+ 
+ 		/* create any TLV data */
+-		soc_tplg_create_tlv(tplg, &kc[i], &mc->hdr);
++		err = soc_tplg_create_tlv(tplg, &kc[i], &mc->hdr);
++		if (err < 0) {
++			dev_err(tplg->dev, "ASoC: failed to create TLV %s\n",
++				mc->hdr.name);
++			kfree(sm);
++			continue;
++		}
+ 
+ 		/* pass control to driver for optional further init */
+ 		err = soc_tplg_init_kcontrol(tplg, &kc[i],
+@@ -1766,10 +1786,13 @@ static int soc_tplg_dapm_complete(struct soc_tplg *tplg)
+ 	return 0;
+ }
+ 
+-static void set_stream_info(struct snd_soc_pcm_stream *stream,
++static int set_stream_info(struct snd_soc_pcm_stream *stream,
+ 	struct snd_soc_tplg_stream_caps *caps)
+ {
+ 	stream->stream_name = kstrdup(caps->name, GFP_KERNEL);
++	if (!stream->stream_name)
++		return -ENOMEM;
++
+ 	stream->channels_min = le32_to_cpu(caps->channels_min);
+ 	stream->channels_max = le32_to_cpu(caps->channels_max);
+ 	stream->rates = le32_to_cpu(caps->rates);
+@@ -1777,6 +1800,8 @@ static void set_stream_info(struct snd_soc_pcm_stream *stream,
+ 	stream->rate_max = le32_to_cpu(caps->rate_max);
+ 	stream->formats = le64_to_cpu(caps->formats);
+ 	stream->sig_bits = le32_to_cpu(caps->sig_bits);
++
++	return 0;
+ }
+ 
+ static void set_dai_flags(struct snd_soc_dai_driver *dai_drv,
+@@ -1812,20 +1837,29 @@ static int soc_tplg_dai_create(struct soc_tplg *tplg,
+ 	if (dai_drv == NULL)
+ 		return -ENOMEM;
+ 
+-	if (strlen(pcm->dai_name))
++	if (strlen(pcm->dai_name)) {
+ 		dai_drv->name = kstrdup(pcm->dai_name, GFP_KERNEL);
++		if (!dai_drv->name) {
++			ret = -ENOMEM;
++			goto err;
++		}
++	}
+ 	dai_drv->id = le32_to_cpu(pcm->dai_id);
+ 
+ 	if (pcm->playback) {
+ 		stream = &dai_drv->playback;
+ 		caps = &pcm->caps[SND_SOC_TPLG_STREAM_PLAYBACK];
+-		set_stream_info(stream, caps);
++		ret = set_stream_info(stream, caps);
++		if (ret < 0)
++			goto err;
+ 	}
+ 
+ 	if (pcm->capture) {
+ 		stream = &dai_drv->capture;
+ 		caps = &pcm->caps[SND_SOC_TPLG_STREAM_CAPTURE];
+-		set_stream_info(stream, caps);
++		ret = set_stream_info(stream, caps);
++		if (ret < 0)
++			goto err;
+ 	}
+ 
+ 	if (pcm->compress)
+@@ -1835,11 +1869,7 @@ static int soc_tplg_dai_create(struct soc_tplg *tplg,
+ 	ret = soc_tplg_dai_load(tplg, dai_drv, pcm, NULL);
+ 	if (ret < 0) {
+ 		dev_err(tplg->comp->dev, "ASoC: DAI loading failed\n");
+-		kfree(dai_drv->playback.stream_name);
+-		kfree(dai_drv->capture.stream_name);
+-		kfree(dai_drv->name);
+-		kfree(dai_drv);
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	dai_drv->dobj.index = tplg->index;
+@@ -1860,6 +1890,14 @@ static int soc_tplg_dai_create(struct soc_tplg *tplg,
+ 		return ret;
+ 	}
+ 
++	return 0;
++
++err:
++	kfree(dai_drv->playback.stream_name);
++	kfree(dai_drv->capture.stream_name);
++	kfree(dai_drv->name);
++	kfree(dai_drv);
++
+ 	return ret;
+ }
+ 
+@@ -1916,11 +1954,20 @@ static int soc_tplg_fe_link_create(struct soc_tplg *tplg,
+ 	if (strlen(pcm->pcm_name)) {
+ 		link->name = kstrdup(pcm->pcm_name, GFP_KERNEL);
+ 		link->stream_name = kstrdup(pcm->pcm_name, GFP_KERNEL);
++		if (!link->name || !link->stream_name) {
++			ret = -ENOMEM;
++			goto err;
++		}
+ 	}
+ 	link->id = le32_to_cpu(pcm->pcm_id);
+ 
+-	if (strlen(pcm->dai_name))
++	if (strlen(pcm->dai_name)) {
+ 		link->cpus->dai_name = kstrdup(pcm->dai_name, GFP_KERNEL);
++		if (!link->cpus->dai_name) {
++			ret = -ENOMEM;
++			goto err;
++		}
++	}
+ 
+ 	link->codecs->name = "snd-soc-dummy";
+ 	link->codecs->dai_name = "snd-soc-dummy-dai";
+@@ -2088,7 +2135,9 @@ static int soc_tplg_pcm_elems_load(struct soc_tplg *tplg,
+ 			_pcm = pcm;
+ 		} else {
+ 			abi_match = false;
+-			pcm_new_ver(tplg, pcm, &_pcm);
++			ret = pcm_new_ver(tplg, pcm, &_pcm);
++			if (ret < 0)
++				return ret;
+ 		}
+ 
+ 		/* create the FE DAIs and DAI links */
+@@ -2436,13 +2485,17 @@ static int soc_tplg_dai_config(struct soc_tplg *tplg,
+ 	if (d->playback) {
+ 		stream = &dai_drv->playback;
+ 		caps = &d->caps[SND_SOC_TPLG_STREAM_PLAYBACK];
+-		set_stream_info(stream, caps);
++		ret = set_stream_info(stream, caps);
++		if (ret < 0)
++			goto err;
+ 	}
+ 
+ 	if (d->capture) {
+ 		stream = &dai_drv->capture;
+ 		caps = &d->caps[SND_SOC_TPLG_STREAM_CAPTURE];
+-		set_stream_info(stream, caps);
++		ret = set_stream_info(stream, caps);
++		if (ret < 0)
++			goto err;
+ 	}
+ 
+ 	if (d->flag_mask)
+@@ -2454,10 +2507,15 @@ static int soc_tplg_dai_config(struct soc_tplg *tplg,
+ 	ret = soc_tplg_dai_load(tplg, dai_drv, NULL, dai);
+ 	if (ret < 0) {
+ 		dev_err(tplg->comp->dev, "ASoC: DAI loading failed\n");
+-		return ret;
++		goto err;
+ 	}
+ 
+ 	return 0;
++
++err:
++	kfree(dai_drv->playback.stream_name);
++	kfree(dai_drv->capture.stream_name);
++	return ret;
+ }
+ 
+ /* load physical DAI elements */
+@@ -2466,7 +2524,7 @@ static int soc_tplg_dai_elems_load(struct soc_tplg *tplg,
+ {
+ 	struct snd_soc_tplg_dai *dai;
+ 	int count;
+-	int i;
++	int i, ret;
+ 
+ 	count = le32_to_cpu(hdr->count);
+ 
+@@ -2481,7 +2539,12 @@ static int soc_tplg_dai_elems_load(struct soc_tplg *tplg,
+ 			return -EINVAL;
+ 		}
+ 
+-		soc_tplg_dai_config(tplg, dai);
++		ret = soc_tplg_dai_config(tplg, dai);
++		if (ret < 0) {
++			dev_err(tplg->dev, "ASoC: failed to configure DAI\n");
++			return ret;
++		}
++
+ 		tplg->pos += (sizeof(*dai) + le32_to_cpu(dai->priv.size));
+ 	}
+ 
+@@ -2589,7 +2652,7 @@ static int soc_valid_header(struct soc_tplg *tplg,
+ 	}
+ 
+ 	/* big endian firmware objects not supported atm */
+-	if (hdr->magic == SOC_TPLG_MAGIC_BIG_ENDIAN) {
++	if (le32_to_cpu(hdr->magic) == SOC_TPLG_MAGIC_BIG_ENDIAN) {
+ 		dev_err(tplg->dev,
+ 			"ASoC: pass %d big endian not supported header got %x at offset 0x%lx size 0x%zx.\n",
+ 			tplg->pass, hdr->magic,
+diff --git a/tools/bpf/runqslower/Makefile b/tools/bpf/runqslower/Makefile
+index 39edd68afa8e..8a6f82e56a24 100644
+--- a/tools/bpf/runqslower/Makefile
++++ b/tools/bpf/runqslower/Makefile
+@@ -8,7 +8,7 @@ BPFTOOL ?= $(DEFAULT_BPFTOOL)
+ LIBBPF_SRC := $(abspath ../../lib/bpf)
+ BPFOBJ := $(OUTPUT)/libbpf.a
+ BPF_INCLUDE := $(OUTPUT)
+-INCLUDES := -I$(BPF_INCLUDE) -I$(OUTPUT) -I$(abspath ../../lib)
++INCLUDES := -I$(OUTPUT) -I$(BPF_INCLUDE) -I$(abspath ../../lib)
+ CFLAGS := -g -Wall
+ 
+ # Try to detect best kernel BTF source
+diff --git a/tools/testing/selftests/ipc/msgque.c b/tools/testing/selftests/ipc/msgque.c
+index 4c156aeab6b8..5ec4d9e18806 100644
+--- a/tools/testing/selftests/ipc/msgque.c
++++ b/tools/testing/selftests/ipc/msgque.c
+@@ -137,7 +137,7 @@ int dump_queue(struct msgque_data *msgque)
+ 	for (kern_id = 0; kern_id < 256; kern_id++) {
+ 		ret = msgctl(kern_id, MSG_STAT, &ds);
+ 		if (ret < 0) {
+-			if (errno == -EINVAL)
++			if (errno == EINVAL)
+ 				continue;
+ 			printf("Failed to get stats for IPC queue with id %d\n",
+ 					kern_id);
+diff --git a/tools/testing/selftests/tpm2/test_smoke.sh b/tools/testing/selftests/tpm2/test_smoke.sh
+index b630c7b5950a..8155c2ea7ccb 100755
+--- a/tools/testing/selftests/tpm2/test_smoke.sh
++++ b/tools/testing/selftests/tpm2/test_smoke.sh
+@@ -1,17 +1,8 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+-self.flags = flags
+ 
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
+-
+-
+-if [ -f /dev/tpm0 ] ; then
+-	python -m unittest -v tpm2_tests.SmokeTest
+-	python -m unittest -v tpm2_tests.AsyncTest
+-else
+-	exit $ksft_skip
+-fi
++python -m unittest -v tpm2_tests.SmokeTest
++python -m unittest -v tpm2_tests.AsyncTest
+ 
+ CLEAR_CMD=$(which tpm2_clear)
+ if [ -n $CLEAR_CMD ]; then
+diff --git a/tools/testing/selftests/tpm2/test_space.sh b/tools/testing/selftests/tpm2/test_space.sh
+index 180b469c53b4..a6f5e346635e 100755
+--- a/tools/testing/selftests/tpm2/test_space.sh
++++ b/tools/testing/selftests/tpm2/test_space.sh
+@@ -1,11 +1,4 @@
+ #!/bin/bash
+ # SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause)
+ 
+-# Kselftest framework requirement - SKIP code is 4.
+-ksft_skip=4
+-
+-if [ -f /dev/tpmrm0 ] ; then
+-	python -m unittest -v tpm2_tests.SpaceTest
+-else
+-	exit $ksft_skip
+-fi
++python -m unittest -v tpm2_tests.SpaceTest
+diff --git a/tools/testing/selftests/vm/Makefile b/tools/testing/selftests/vm/Makefile
+index 7f9a8a8c31da..8074340c6b3a 100644
+--- a/tools/testing/selftests/vm/Makefile
++++ b/tools/testing/selftests/vm/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ # Makefile for vm selftests
+ uname_M := $(shell uname -m 2>/dev/null || echo not)
+-ARCH ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/')
++MACHINE ?= $(shell echo $(uname_M) | sed -e 's/aarch64.*/arm64/')
+ 
+ CFLAGS = -Wall -I ../../../../usr/include $(EXTRA_CFLAGS)
+ LDLIBS = -lrt
+@@ -19,7 +19,7 @@ TEST_GEN_FILES += thuge-gen
+ TEST_GEN_FILES += transhuge-stress
+ TEST_GEN_FILES += userfaultfd
+ 
+-ifneq (,$(filter $(ARCH),arm64 ia64 mips64 parisc64 ppc64 riscv64 s390x sh64 sparc64 x86_64))
++ifneq (,$(filter $(MACHINE),arm64 ia64 mips64 parisc64 ppc64 ppc64le riscv64 s390x sh64 sparc64 x86_64))
+ TEST_GEN_FILES += va_128TBswitch
+ TEST_GEN_FILES += virtual_address_range
+ endif
+diff --git a/tools/testing/selftests/vm/run_vmtests b/tools/testing/selftests/vm/run_vmtests
+index f33714843198..6e137c9baa1e 100755
+--- a/tools/testing/selftests/vm/run_vmtests
++++ b/tools/testing/selftests/vm/run_vmtests
+@@ -59,7 +59,7 @@ else
+ fi
+ 
+ #filter 64bit architectures
+-ARCH64STR="arm64 ia64 mips64 parisc64 ppc64 riscv64 s390x sh64 sparc64 x86_64"
++ARCH64STR="arm64 ia64 mips64 parisc64 ppc64 ppc64le riscv64 s390x sh64 sparc64 x86_64"
+ if [ -z $ARCH ]; then
+   ARCH=`uname -m 2>/dev/null | sed -e 's/aarch64.*/arm64/'`
+ fi
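The vm Makefile and run_vmtests hunks above both lean on the same sed normalization of `uname -m`; a small sketch of what that mapping does (the function name here is ours, purely illustrative):

```shell
# Normalize machine names the way the vm selftests do:
# any aarch64 variant collapses to arm64, everything else passes through.
normalize_machine() {
	echo "$1" | sed -e 's/aarch64.*/arm64/'
}

normalize_machine aarch64      # arm64
normalize_machine aarch64_be   # arm64
normalize_machine x86_64       # x86_64
normalize_machine ppc64le      # ppc64le
```

Note that `ppc64le` passes through unchanged, which is why both hunks also add it explicitly to their 64-bit architecture lists.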


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-13 12:06 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-13 12:06 UTC (permalink / raw
  To: gentoo-commits

commit:     38f557ebf954d6b04056c57dce98ae9361cd866a
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 13 11:55:40 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 13 12:05:55 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=38f557eb

Add UTS_NS to GENTOO_LINUX_PORTAGE as required by portage since 2.3.99

Bug: https://bugs.gentoo.org/722772

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 4567_distro-Gentoo-Kconfig.patch | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/4567_distro-Gentoo-Kconfig.patch b/4567_distro-Gentoo-Kconfig.patch
index 581cb20..cb2eaa6 100644
--- a/4567_distro-Gentoo-Kconfig.patch
+++ b/4567_distro-Gentoo-Kconfig.patch
@@ -6,9 +6,9 @@
  source "Documentation/Kconfig"
 +
 +source "distro/Kconfig"
---- /dev/null	2020-04-15 02:49:37.900191585 -0400
-+++ b/distro/Kconfig	2020-04-15 11:07:10.952929540 -0400
-@@ -0,0 +1,156 @@
+--- /dev/null	2020-05-13 03:13:57.920193259 -0400
++++ b/distro/Kconfig	2020-05-13 07:51:36.841663359 -0400
+@@ -0,0 +1,157 @@
 +menu "Gentoo Linux"
 +
 +config GENTOO_LINUX
@@ -65,6 +65,7 @@
 +	select NET_NS
 +	select PID_NS
 +	select SYSVIPC
++	select UTS_NS
 +
 +	help
 +		This enables options required by various Portage FEATURES.



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-13 16:48 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-13 16:48 UTC (permalink / raw
  To: gentoo-commits

commit:     bf9606c0bfb6e574748a7c1b61cce84eec1091d3
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 13 16:46:05 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 13 16:46:05 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=bf9606c0

VIDEO_TVP5150 requires REGMAP_I2C to build.  Select it by default.

Reported-By: Max Steel <M.Steel <AT> web.de>
Closes: https://bugs.gentoo.org/721096

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                                |  4 ++++
 2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch | 10 ++++++++++
 2 files changed, 14 insertions(+)

diff --git a/0000_README b/0000_README
index dcfb651..f4994be 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  2900_tmp513-Fix-build-issue-by-selecting-CONFIG_REG.patch
 From:   https://bugs.gentoo.org/710790
 Desc:   tmp513 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #710790. Thanks to Phil Stracchino
 
+Patch:  2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
+From:   https://bugs.gentoo.org/721096
+Desc:   VIDEO_TVP5150 requires REGMAP_I2C to build.  Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
new file mode 100644
index 0000000..1bc058e
--- /dev/null
+++ b/2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
@@ -0,0 +1,10 @@
+--- a/drivers/media/i2c/Kconfig	2020-05-13 12:38:05.102903309 -0400
++++ b/drivers/media/i2c/Kconfig	2020-05-13 12:38:51.283171977 -0400
+@@ -378,6 +378,7 @@ config VIDEO_TVP514X
+ config VIDEO_TVP5150
+ 	tristate "Texas Instruments TVP5150 video decoder"
+ 	depends on VIDEO_V4L2 && I2C
++	select REGMAP_I2C
+ 	select V4L2_FWNODE
+ 	help
+ 	  Support for the Texas Instruments TVP5150 video decoder.



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-14 11:34 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-14 11:34 UTC (permalink / raw
  To: gentoo-commits

commit:     ee77ed5cd54e726d06c811541727b80b2472cd96
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Thu May 14 11:34:11 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Thu May 14 11:34:11 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=ee77ed5c

Linux patch 5.6.13

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1012_linux-5.6.13.patch | 3958 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 3962 insertions(+)

diff --git a/0000_README b/0000_README
index f4994be..6a6ec25 100644
--- a/0000_README
+++ b/0000_README
@@ -91,6 +91,10 @@ Patch:  1011_linux-5.6.12.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.12
 
+Patch:  1012_linux-5.6.13.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.13
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1012_linux-5.6.13.patch b/1012_linux-5.6.13.patch
new file mode 100644
index 0000000..cf736d2
--- /dev/null
+++ b/1012_linux-5.6.13.patch
@@ -0,0 +1,3958 @@
+diff --git a/Makefile b/Makefile
+index 97e4c4d9ac95..d252219666fd 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 12
++SUBLEVEL = 13
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/crypto/chacha-glue.c b/arch/arm/crypto/chacha-glue.c
+index 6fdb0ac62b3d..59da6c0b63b6 100644
+--- a/arch/arm/crypto/chacha-glue.c
++++ b/arch/arm/crypto/chacha-glue.c
+@@ -91,9 +91,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
+ 		return;
+ 	}
+ 
+-	kernel_neon_begin();
+-	chacha_doneon(state, dst, src, bytes, nrounds);
+-	kernel_neon_end();
++	do {
++		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
++
++		kernel_neon_begin();
++		chacha_doneon(state, dst, src, todo, nrounds);
++		kernel_neon_end();
++
++		bytes -= todo;
++		src += todo;
++		dst += todo;
++	} while (bytes);
+ }
+ EXPORT_SYMBOL(chacha_crypt_arch);
+ 
+diff --git a/arch/arm/crypto/nhpoly1305-neon-glue.c b/arch/arm/crypto/nhpoly1305-neon-glue.c
+index ae5aefc44a4d..ffa8d73fe722 100644
+--- a/arch/arm/crypto/nhpoly1305-neon-glue.c
++++ b/arch/arm/crypto/nhpoly1305-neon-glue.c
+@@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
+ 		return crypto_nhpoly1305_update(desc, src, srclen);
+ 
+ 	do {
+-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
++		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
+ 
+ 		kernel_neon_begin();
+ 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+diff --git a/arch/arm/crypto/poly1305-glue.c b/arch/arm/crypto/poly1305-glue.c
+index ceec04ec2f40..13cfef4ae22e 100644
+--- a/arch/arm/crypto/poly1305-glue.c
++++ b/arch/arm/crypto/poly1305-glue.c
+@@ -160,13 +160,20 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
+ 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
+ 
+ 		if (static_branch_likely(&have_neon) && do_neon) {
+-			kernel_neon_begin();
+-			poly1305_blocks_neon(&dctx->h, src, len, 1);
+-			kernel_neon_end();
++			do {
++				unsigned int todo = min_t(unsigned int, len, SZ_4K);
++
++				kernel_neon_begin();
++				poly1305_blocks_neon(&dctx->h, src, todo, 1);
++				kernel_neon_end();
++
++				len -= todo;
++				src += todo;
++			} while (len);
+ 		} else {
+ 			poly1305_blocks_arm(&dctx->h, src, len, 1);
++			src += len;
+ 		}
+-		src += len;
+ 		nbytes %= POLY1305_BLOCK_SIZE;
+ 	}
+ 
+diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c
+index 37ca3e889848..af2bbca38e70 100644
+--- a/arch/arm64/crypto/chacha-neon-glue.c
++++ b/arch/arm64/crypto/chacha-neon-glue.c
+@@ -87,9 +87,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
+ 	    !crypto_simd_usable())
+ 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);
+ 
+-	kernel_neon_begin();
+-	chacha_doneon(state, dst, src, bytes, nrounds);
+-	kernel_neon_end();
++	do {
++		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
++
++		kernel_neon_begin();
++		chacha_doneon(state, dst, src, todo, nrounds);
++		kernel_neon_end();
++
++		bytes -= todo;
++		src += todo;
++		dst += todo;
++	} while (bytes);
+ }
+ EXPORT_SYMBOL(chacha_crypt_arch);
+ 
+diff --git a/arch/arm64/crypto/nhpoly1305-neon-glue.c b/arch/arm64/crypto/nhpoly1305-neon-glue.c
+index 895d3727c1fb..c5405e6a6db7 100644
+--- a/arch/arm64/crypto/nhpoly1305-neon-glue.c
++++ b/arch/arm64/crypto/nhpoly1305-neon-glue.c
+@@ -30,7 +30,7 @@ static int nhpoly1305_neon_update(struct shash_desc *desc,
+ 		return crypto_nhpoly1305_update(desc, src, srclen);
+ 
+ 	do {
+-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
++		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
+ 
+ 		kernel_neon_begin();
+ 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_neon);
+diff --git a/arch/arm64/crypto/poly1305-glue.c b/arch/arm64/crypto/poly1305-glue.c
+index e97b092f56b8..f33ada70c4ed 100644
+--- a/arch/arm64/crypto/poly1305-glue.c
++++ b/arch/arm64/crypto/poly1305-glue.c
+@@ -143,13 +143,20 @@ void poly1305_update_arch(struct poly1305_desc_ctx *dctx, const u8 *src,
+ 		unsigned int len = round_down(nbytes, POLY1305_BLOCK_SIZE);
+ 
+ 		if (static_branch_likely(&have_neon) && crypto_simd_usable()) {
+-			kernel_neon_begin();
+-			poly1305_blocks_neon(&dctx->h, src, len, 1);
+-			kernel_neon_end();
++			do {
++				unsigned int todo = min_t(unsigned int, len, SZ_4K);
++
++				kernel_neon_begin();
++				poly1305_blocks_neon(&dctx->h, src, todo, 1);
++				kernel_neon_end();
++
++				len -= todo;
++				src += todo;
++			} while (len);
+ 		} else {
+ 			poly1305_blocks(&dctx->h, src, len, 1);
++			src += len;
+ 		}
+-		src += len;
+ 		nbytes %= POLY1305_BLOCK_SIZE;
+ 	}
+ 
+diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
+index 2bd92301d32f..6194cb3309d0 100644
+--- a/arch/arm64/kvm/guest.c
++++ b/arch/arm64/kvm/guest.c
+@@ -201,6 +201,13 @@ static int set_core_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg)
+ 	}
+ 
+ 	memcpy((u32 *)regs + off, valp, KVM_REG_SIZE(reg->id));
++
++	if (*vcpu_cpsr(vcpu) & PSR_MODE32_BIT) {
++		int i;
++
++		for (i = 0; i < 16; i++)
++			*vcpu_reg32(vcpu, i) = (u32)*vcpu_reg32(vcpu, i);
++	}
+ out:
+ 	return err;
+ }
+diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
+index bbeb6a5a6ba6..0be3355e3499 100644
+--- a/arch/arm64/mm/hugetlbpage.c
++++ b/arch/arm64/mm/hugetlbpage.c
+@@ -230,6 +230,8 @@ pte_t *huge_pte_alloc(struct mm_struct *mm,
+ 		ptep = (pte_t *)pudp;
+ 	} else if (sz == (CONT_PTE_SIZE)) {
+ 		pmdp = pmd_alloc(mm, pudp, addr);
++		if (!pmdp)
++			return NULL;
+ 
+ 		WARN_ON(addr & (sz - 1));
+ 		/*
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index fab855963c73..157924baa191 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -149,7 +149,8 @@ void __init setup_bootmem(void)
+ 	memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);
+ 
+ 	set_max_mapnr(PFN_DOWN(mem_size));
+-	max_low_pfn = PFN_DOWN(memblock_end_of_DRAM());
++	max_pfn = PFN_DOWN(memblock_end_of_DRAM());
++	max_low_pfn = max_pfn;
+ 
+ #ifdef CONFIG_BLK_DEV_INITRD
+ 	setup_initrd();
+diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
+index ed52ffa8d5d4..560310e29e27 100644
+--- a/arch/s390/kvm/priv.c
++++ b/arch/s390/kvm/priv.c
+@@ -626,10 +626,12 @@ static int handle_pqap(struct kvm_vcpu *vcpu)
+ 	 * available for the guest are AQIC and TAPQ with the t bit set
+ 	 * since we do not set IC.3 (FIII) we currently will only intercept
+ 	 * the AQIC function code.
++	 * Note: running nested under z/VM can result in intercepts for other
++	 * function codes, e.g. PQAP(QCI). We do not support this and bail out.
+ 	 */
+ 	reg0 = vcpu->run->s.regs.gprs[0];
+ 	fc = (reg0 >> 24) & 0xff;
+-	if (WARN_ON_ONCE(fc != 0x03))
++	if (fc != 0x03)
+ 		return -EOPNOTSUPP;
+ 
+ 	/* PQAP instruction is allowed for guest kernel only */
+diff --git a/arch/x86/crypto/blake2s-glue.c b/arch/x86/crypto/blake2s-glue.c
+index 06ef2d4a4701..6737bcea1fa1 100644
+--- a/arch/x86/crypto/blake2s-glue.c
++++ b/arch/x86/crypto/blake2s-glue.c
+@@ -32,16 +32,16 @@ void blake2s_compress_arch(struct blake2s_state *state,
+ 			   const u32 inc)
+ {
+ 	/* SIMD disables preemption, so relax after processing each page. */
+-	BUILD_BUG_ON(PAGE_SIZE / BLAKE2S_BLOCK_SIZE < 8);
++	BUILD_BUG_ON(SZ_4K / BLAKE2S_BLOCK_SIZE < 8);
+ 
+ 	if (!static_branch_likely(&blake2s_use_ssse3) || !crypto_simd_usable()) {
+ 		blake2s_compress_generic(state, block, nblocks, inc);
+ 		return;
+ 	}
+ 
+-	for (;;) {
++	do {
+ 		const size_t blocks = min_t(size_t, nblocks,
+-					    PAGE_SIZE / BLAKE2S_BLOCK_SIZE);
++					    SZ_4K / BLAKE2S_BLOCK_SIZE);
+ 
+ 		kernel_fpu_begin();
+ 		if (IS_ENABLED(CONFIG_AS_AVX512) &&
+@@ -52,10 +52,8 @@ void blake2s_compress_arch(struct blake2s_state *state,
+ 		kernel_fpu_end();
+ 
+ 		nblocks -= blocks;
+-		if (!nblocks)
+-			break;
+ 		block += blocks * BLAKE2S_BLOCK_SIZE;
+-	}
++	} while (nblocks);
+ }
+ EXPORT_SYMBOL(blake2s_compress_arch);
+ 
+diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c
+index 68a74953efaf..ebf2cd7ff2f0 100644
+--- a/arch/x86/crypto/chacha_glue.c
++++ b/arch/x86/crypto/chacha_glue.c
+@@ -154,9 +154,17 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
+ 	    bytes <= CHACHA_BLOCK_SIZE)
+ 		return chacha_crypt_generic(state, dst, src, bytes, nrounds);
+ 
+-	kernel_fpu_begin();
+-	chacha_dosimd(state, dst, src, bytes, nrounds);
+-	kernel_fpu_end();
++	do {
++		unsigned int todo = min_t(unsigned int, bytes, SZ_4K);
++
++		kernel_fpu_begin();
++		chacha_dosimd(state, dst, src, todo, nrounds);
++		kernel_fpu_end();
++
++		bytes -= todo;
++		src += todo;
++		dst += todo;
++	} while (bytes);
+ }
+ EXPORT_SYMBOL(chacha_crypt_arch);
+ 
+diff --git a/arch/x86/crypto/nhpoly1305-avx2-glue.c b/arch/x86/crypto/nhpoly1305-avx2-glue.c
+index f7567cbd35b6..80fcb85736e1 100644
+--- a/arch/x86/crypto/nhpoly1305-avx2-glue.c
++++ b/arch/x86/crypto/nhpoly1305-avx2-glue.c
+@@ -29,7 +29,7 @@ static int nhpoly1305_avx2_update(struct shash_desc *desc,
+ 		return crypto_nhpoly1305_update(desc, src, srclen);
+ 
+ 	do {
+-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
++		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
+ 
+ 		kernel_fpu_begin();
+ 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_avx2);
+diff --git a/arch/x86/crypto/nhpoly1305-sse2-glue.c b/arch/x86/crypto/nhpoly1305-sse2-glue.c
+index a661ede3b5cf..cc6b7c1a2705 100644
+--- a/arch/x86/crypto/nhpoly1305-sse2-glue.c
++++ b/arch/x86/crypto/nhpoly1305-sse2-glue.c
+@@ -29,7 +29,7 @@ static int nhpoly1305_sse2_update(struct shash_desc *desc,
+ 		return crypto_nhpoly1305_update(desc, src, srclen);
+ 
+ 	do {
+-		unsigned int n = min_t(unsigned int, srclen, PAGE_SIZE);
++		unsigned int n = min_t(unsigned int, srclen, SZ_4K);
+ 
+ 		kernel_fpu_begin();
+ 		crypto_nhpoly1305_update_helper(desc, src, n, _nh_sse2);
+diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
+index 79bb58737d52..61b2bc8b6986 100644
+--- a/arch/x86/crypto/poly1305_glue.c
++++ b/arch/x86/crypto/poly1305_glue.c
+@@ -91,8 +91,8 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
+ 	struct poly1305_arch_internal *state = ctx;
+ 
+ 	/* SIMD disables preemption, so relax after processing each page. */
+-	BUILD_BUG_ON(PAGE_SIZE < POLY1305_BLOCK_SIZE ||
+-		     PAGE_SIZE % POLY1305_BLOCK_SIZE);
++	BUILD_BUG_ON(SZ_4K < POLY1305_BLOCK_SIZE ||
++		     SZ_4K % POLY1305_BLOCK_SIZE);
+ 
+ 	if (!IS_ENABLED(CONFIG_AS_AVX) || !static_branch_likely(&poly1305_use_avx) ||
+ 	    (len < (POLY1305_BLOCK_SIZE * 18) && !state->is_base2_26) ||
+@@ -102,8 +102,8 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
+ 		return;
+ 	}
+ 
+-	for (;;) {
+-		const size_t bytes = min_t(size_t, len, PAGE_SIZE);
++	do {
++		const size_t bytes = min_t(size_t, len, SZ_4K);
+ 
+ 		kernel_fpu_begin();
+ 		if (IS_ENABLED(CONFIG_AS_AVX512) && static_branch_likely(&poly1305_use_avx512))
+@@ -113,11 +113,10 @@ static void poly1305_simd_blocks(void *ctx, const u8 *inp, size_t len,
+ 		else
+ 			poly1305_blocks_avx(ctx, inp, bytes, padbit);
+ 		kernel_fpu_end();
++
+ 		len -= bytes;
+-		if (!len)
+-			break;
+ 		inp += bytes;
+-	}
++	} while (len);
+ }
+ 
+ static void poly1305_simd_emit(void *ctx, u8 mac[POLY1305_DIGEST_SIZE],
+diff --git a/arch/x86/entry/calling.h b/arch/x86/entry/calling.h
+index 0789e13ece90..1c7f13bb6728 100644
+--- a/arch/x86/entry/calling.h
++++ b/arch/x86/entry/calling.h
+@@ -98,13 +98,6 @@ For 32-bit we have the following conventions - kernel is built with
+ #define SIZEOF_PTREGS	21*8
+ 
+ .macro PUSH_AND_CLEAR_REGS rdx=%rdx rax=%rax save_ret=0
+-	/*
+-	 * Push registers and sanitize registers of values that a
+-	 * speculation attack might otherwise want to exploit. The
+-	 * lower registers are likely clobbered well before they
+-	 * could be put to use in a speculative execution gadget.
+-	 * Interleave XOR with PUSH for better uop scheduling:
+-	 */
+ 	.if \save_ret
+ 	pushq	%rsi		/* pt_regs->si */
+ 	movq	8(%rsp), %rsi	/* temporarily store the return address in %rsi */
+@@ -114,34 +107,43 @@ For 32-bit we have the following conventions - kernel is built with
+ 	pushq   %rsi		/* pt_regs->si */
+ 	.endif
+ 	pushq	\rdx		/* pt_regs->dx */
+-	xorl	%edx, %edx	/* nospec   dx */
+ 	pushq   %rcx		/* pt_regs->cx */
+-	xorl	%ecx, %ecx	/* nospec   cx */
+ 	pushq   \rax		/* pt_regs->ax */
+ 	pushq   %r8		/* pt_regs->r8 */
+-	xorl	%r8d, %r8d	/* nospec   r8 */
+ 	pushq   %r9		/* pt_regs->r9 */
+-	xorl	%r9d, %r9d	/* nospec   r9 */
+ 	pushq   %r10		/* pt_regs->r10 */
+-	xorl	%r10d, %r10d	/* nospec   r10 */
+ 	pushq   %r11		/* pt_regs->r11 */
+-	xorl	%r11d, %r11d	/* nospec   r11*/
+ 	pushq	%rbx		/* pt_regs->rbx */
+-	xorl    %ebx, %ebx	/* nospec   rbx*/
+ 	pushq	%rbp		/* pt_regs->rbp */
+-	xorl    %ebp, %ebp	/* nospec   rbp*/
+ 	pushq	%r12		/* pt_regs->r12 */
+-	xorl	%r12d, %r12d	/* nospec   r12*/
+ 	pushq	%r13		/* pt_regs->r13 */
+-	xorl	%r13d, %r13d	/* nospec   r13*/
+ 	pushq	%r14		/* pt_regs->r14 */
+-	xorl	%r14d, %r14d	/* nospec   r14*/
+ 	pushq	%r15		/* pt_regs->r15 */
+-	xorl	%r15d, %r15d	/* nospec   r15*/
+ 	UNWIND_HINT_REGS
++
+ 	.if \save_ret
+ 	pushq	%rsi		/* return address on top of stack */
+ 	.endif
++
++	/*
++	 * Sanitize registers of values that a speculation attack might
++	 * otherwise want to exploit. The lower registers are likely clobbered
++	 * well before they could be put to use in a speculative execution
++	 * gadget.
++	 */
++	xorl	%edx,  %edx	/* nospec dx  */
++	xorl	%ecx,  %ecx	/* nospec cx  */
++	xorl	%r8d,  %r8d	/* nospec r8  */
++	xorl	%r9d,  %r9d	/* nospec r9  */
++	xorl	%r10d, %r10d	/* nospec r10 */
++	xorl	%r11d, %r11d	/* nospec r11 */
++	xorl	%ebx,  %ebx	/* nospec rbx */
++	xorl	%ebp,  %ebp	/* nospec rbp */
++	xorl	%r12d, %r12d	/* nospec r12 */
++	xorl	%r13d, %r13d	/* nospec r13 */
++	xorl	%r14d, %r14d	/* nospec r14 */
++	xorl	%r15d, %r15d	/* nospec r15 */
++
+ .endm
+ 
+ .macro POP_REGS pop_rdi=1 skip_r11rcx=0
+diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
+index f2bb91e87877..faa53fee0663 100644
+--- a/arch/x86/entry/entry_64.S
++++ b/arch/x86/entry/entry_64.S
+@@ -249,7 +249,6 @@ SYM_INNER_LABEL(entry_SYSCALL_64_after_hwframe, SYM_L_GLOBAL)
+ 	 */
+ syscall_return_via_sysret:
+ 	/* rcx and r11 are already restored (see code above) */
+-	UNWIND_HINT_EMPTY
+ 	POP_REGS pop_rdi=0 skip_r11rcx=1
+ 
+ 	/*
+@@ -258,6 +257,7 @@ syscall_return_via_sysret:
+ 	 */
+ 	movq	%rsp, %rdi
+ 	movq	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
++	UNWIND_HINT_EMPTY
+ 
+ 	pushq	RSP-RDI(%rdi)	/* RSP */
+ 	pushq	(%rdi)		/* RDI */
+@@ -279,8 +279,7 @@ SYM_CODE_END(entry_SYSCALL_64)
+  * %rdi: prev task
+  * %rsi: next task
+  */
+-SYM_CODE_START(__switch_to_asm)
+-	UNWIND_HINT_FUNC
++SYM_FUNC_START(__switch_to_asm)
+ 	/*
+ 	 * Save callee-saved registers
+ 	 * This must match the order in inactive_task_frame
+@@ -321,7 +320,7 @@ SYM_CODE_START(__switch_to_asm)
+ 	popq	%rbp
+ 
+ 	jmp	__switch_to
+-SYM_CODE_END(__switch_to_asm)
++SYM_FUNC_END(__switch_to_asm)
+ 
+ /*
+  * A newly forked process directly context switches into this address.
+@@ -512,7 +511,7 @@ SYM_CODE_END(spurious_entries_start)
+  * +----------------------------------------------------+
+  */
+ SYM_CODE_START(interrupt_entry)
+-	UNWIND_HINT_FUNC
++	UNWIND_HINT_IRET_REGS offset=16
+ 	ASM_CLAC
+ 	cld
+ 
+@@ -544,9 +543,9 @@ SYM_CODE_START(interrupt_entry)
+ 	pushq	5*8(%rdi)		/* regs->eflags */
+ 	pushq	4*8(%rdi)		/* regs->cs */
+ 	pushq	3*8(%rdi)		/* regs->ip */
++	UNWIND_HINT_IRET_REGS
+ 	pushq	2*8(%rdi)		/* regs->orig_ax */
+ 	pushq	8(%rdi)			/* return address */
+-	UNWIND_HINT_FUNC
+ 
+ 	movq	(%rdi), %rdi
+ 	jmp	2f
+@@ -637,6 +636,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
+ 	 */
+ 	movq	%rsp, %rdi
+ 	movq	PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rsp
++	UNWIND_HINT_EMPTY
+ 
+ 	/* Copy the IRET frame to the trampoline stack. */
+ 	pushq	6*8(%rdi)	/* SS */
+@@ -1739,7 +1739,7 @@ SYM_CODE_START(rewind_stack_do_exit)
+ 
+ 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rax
+ 	leaq	-PTREGS_SIZE(%rax), %rsp
+-	UNWIND_HINT_FUNC sp_offset=PTREGS_SIZE
++	UNWIND_HINT_REGS
+ 
+ 	call	do_exit
+ SYM_CODE_END(rewind_stack_do_exit)
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index d79b40cd8283..7ba99c0759cf 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -1664,8 +1664,8 @@ void kvm_set_msi_irq(struct kvm *kvm, struct kvm_kernel_irq_routing_entry *e,
+ static inline bool kvm_irq_is_postable(struct kvm_lapic_irq *irq)
+ {
+ 	/* We can only post Fixed and LowPrio IRQs */
+-	return (irq->delivery_mode == dest_Fixed ||
+-		irq->delivery_mode == dest_LowestPrio);
++	return (irq->delivery_mode == APIC_DM_FIXED ||
++		irq->delivery_mode == APIC_DM_LOWEST);
+ }
+ 
+ static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/include/asm/unwind.h b/arch/x86/include/asm/unwind.h
+index 499578f7e6d7..70fc159ebe69 100644
+--- a/arch/x86/include/asm/unwind.h
++++ b/arch/x86/include/asm/unwind.h
+@@ -19,7 +19,7 @@ struct unwind_state {
+ #if defined(CONFIG_UNWINDER_ORC)
+ 	bool signal, full_regs;
+ 	unsigned long sp, bp, ip;
+-	struct pt_regs *regs;
++	struct pt_regs *regs, *prev_regs;
+ #elif defined(CONFIG_UNWINDER_FRAME_POINTER)
+ 	bool got_irq;
+ 	unsigned long *bp, *orig_sp, ip;
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index e9cc182aa97e..80537dcbddef 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -142,9 +142,6 @@ static struct orc_entry *orc_find(unsigned long ip)
+ {
+ 	static struct orc_entry *orc;
+ 
+-	if (!orc_init)
+-		return NULL;
+-
+ 	if (ip == 0)
+ 		return &null_orc_entry;
+ 
+@@ -381,9 +378,38 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
+ 	return true;
+ }
+ 
++/*
++ * If state->regs is non-NULL, and points to a full pt_regs, just get the reg
++ * value from state->regs.
++ *
++ * Otherwise, if state->regs just points to IRET regs, and the previous frame
++ * had full regs, it's safe to get the value from the previous regs.  This can
++ * happen when early/late IRQ entry code gets interrupted by an NMI.
++ */
++static bool get_reg(struct unwind_state *state, unsigned int reg_off,
++		    unsigned long *val)
++{
++	unsigned int reg = reg_off/8;
++
++	if (!state->regs)
++		return false;
++
++	if (state->full_regs) {
++		*val = ((unsigned long *)state->regs)[reg];
++		return true;
++	}
++
++	if (state->prev_regs) {
++		*val = ((unsigned long *)state->prev_regs)[reg];
++		return true;
++	}
++
++	return false;
++}
++
+ bool unwind_next_frame(struct unwind_state *state)
+ {
+-	unsigned long ip_p, sp, orig_ip = state->ip, prev_sp = state->sp;
++	unsigned long ip_p, sp, tmp, orig_ip = state->ip, prev_sp = state->sp;
+ 	enum stack_type prev_type = state->stack_info.type;
+ 	struct orc_entry *orc;
+ 	bool indirect = false;
+@@ -445,39 +471,35 @@ bool unwind_next_frame(struct unwind_state *state)
+ 		break;
+ 
+ 	case ORC_REG_R10:
+-		if (!state->regs || !state->full_regs) {
++		if (!get_reg(state, offsetof(struct pt_regs, r10), &sp)) {
+ 			orc_warn("missing regs for base reg R10 at ip %pB\n",
+ 				 (void *)state->ip);
+ 			goto err;
+ 		}
+-		sp = state->regs->r10;
+ 		break;
+ 
+ 	case ORC_REG_R13:
+-		if (!state->regs || !state->full_regs) {
++		if (!get_reg(state, offsetof(struct pt_regs, r13), &sp)) {
+ 			orc_warn("missing regs for base reg R13 at ip %pB\n",
+ 				 (void *)state->ip);
+ 			goto err;
+ 		}
+-		sp = state->regs->r13;
+ 		break;
+ 
+ 	case ORC_REG_DI:
+-		if (!state->regs || !state->full_regs) {
++		if (!get_reg(state, offsetof(struct pt_regs, di), &sp)) {
+ 			orc_warn("missing regs for base reg DI at ip %pB\n",
+ 				 (void *)state->ip);
+ 			goto err;
+ 		}
+-		sp = state->regs->di;
+ 		break;
+ 
+ 	case ORC_REG_DX:
+-		if (!state->regs || !state->full_regs) {
++		if (!get_reg(state, offsetof(struct pt_regs, dx), &sp)) {
+ 			orc_warn("missing regs for base reg DX at ip %pB\n",
+ 				 (void *)state->ip);
+ 			goto err;
+ 		}
+-		sp = state->regs->dx;
+ 		break;
+ 
+ 	default:
+@@ -504,6 +526,7 @@ bool unwind_next_frame(struct unwind_state *state)
+ 
+ 		state->sp = sp;
+ 		state->regs = NULL;
++		state->prev_regs = NULL;
+ 		state->signal = false;
+ 		break;
+ 
+@@ -515,6 +538,7 @@ bool unwind_next_frame(struct unwind_state *state)
+ 		}
+ 
+ 		state->regs = (struct pt_regs *)sp;
++		state->prev_regs = NULL;
+ 		state->full_regs = true;
+ 		state->signal = true;
+ 		break;
+@@ -526,6 +550,8 @@ bool unwind_next_frame(struct unwind_state *state)
+ 			goto err;
+ 		}
+ 
++		if (state->full_regs)
++			state->prev_regs = state->regs;
+ 		state->regs = (void *)sp - IRET_FRAME_OFFSET;
+ 		state->full_regs = false;
+ 		state->signal = true;
+@@ -534,14 +560,14 @@ bool unwind_next_frame(struct unwind_state *state)
+ 	default:
+ 		orc_warn("unknown .orc_unwind entry type %d for ip %pB\n",
+ 			 orc->type, (void *)orig_ip);
+-		break;
++		goto err;
+ 	}
+ 
+ 	/* Find BP: */
+ 	switch (orc->bp_reg) {
+ 	case ORC_REG_UNDEFINED:
+-		if (state->regs && state->full_regs)
+-			state->bp = state->regs->bp;
++		if (get_reg(state, offsetof(struct pt_regs, bp), &tmp))
++			state->bp = tmp;
+ 		break;
+ 
+ 	case ORC_REG_PREV_SP:
+@@ -585,6 +611,9 @@ EXPORT_SYMBOL_GPL(unwind_next_frame);
+ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 		    struct pt_regs *regs, unsigned long *first_frame)
+ {
++	if (!orc_init)
++		goto done;
++
+ 	memset(state, 0, sizeof(*state));
+ 	state->task = task;
+ 
+@@ -651,7 +680,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 	/* Otherwise, skip ahead to the user-specified starting frame: */
+ 	while (!unwind_done(state) &&
+ 	       (!on_stack(&state->stack_info, first_frame, sizeof(long)) ||
+-			state->sp <= (unsigned long)first_frame))
++			state->sp < (unsigned long)first_frame))
+ 		unwind_next_frame(state);
+ 
+ 	return;
+diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
+index 750ff0b29404..d057376bd3d3 100644
+--- a/arch/x86/kvm/ioapic.c
++++ b/arch/x86/kvm/ioapic.c
+@@ -225,12 +225,12 @@ static int ioapic_set_irq(struct kvm_ioapic *ioapic, unsigned int irq,
+ 	}
+ 
+ 	/*
+-	 * AMD SVM AVIC accelerate EOI write and do not trap,
+-	 * in-kernel IOAPIC will not be able to receive the EOI.
+-	 * In this case, we do lazy update of the pending EOI when
+-	 * trying to set IOAPIC irq.
++	 * AMD SVM AVIC accelerate EOI write iff the interrupt is edge
++	 * triggered, in which case the in-kernel IOAPIC will not be able
++	 * to receive the EOI.  In this case, we do a lazy update of the
++	 * pending EOI when trying to set IOAPIC irq.
+ 	 */
+-	if (kvm_apicv_activated(ioapic->kvm))
++	if (edge && kvm_apicv_activated(ioapic->kvm))
+ 		ioapic_lazy_update_eoi(ioapic, irq);
+ 
+ 	/*
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index 451377533bcb..c974c49221eb 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -1886,7 +1886,7 @@ static struct page **sev_pin_memory(struct kvm *kvm, unsigned long uaddr,
+ 		return NULL;
+ 
+ 	/* Pin the user virtual address. */
+-	npinned = get_user_pages_fast(uaddr, npages, FOLL_WRITE, pages);
++	npinned = get_user_pages_fast(uaddr, npages, write ? FOLL_WRITE : 0, pages);
+ 	if (npinned != npages) {
+ 		pr_err("SEV: Failure locking %lu pages.\n", npages);
+ 		goto err;
+diff --git a/arch/x86/kvm/vmx/vmenter.S b/arch/x86/kvm/vmx/vmenter.S
+index 861ae40e7144..99410f372c41 100644
+--- a/arch/x86/kvm/vmx/vmenter.S
++++ b/arch/x86/kvm/vmx/vmenter.S
+@@ -86,6 +86,9 @@ SYM_FUNC_START(vmx_vmexit)
+ 	/* IMPORTANT: Stuff the RSB immediately after VM-Exit, before RET! */
+ 	FILL_RETURN_BUFFER %_ASM_AX, RSB_CLEAR_LOOPS, X86_FEATURE_RETPOLINE
+ 
++	/* Clear RFLAGS.CF and RFLAGS.ZF to preserve VM-Exit, i.e. !VM-Fail. */
++	or $1, %_ASM_AX
++
+ 	pop %_ASM_AX
+ .Lvmexit_skip_rsb:
+ #endif
+diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
+index c4aedd00c1ba..7ab317e3184e 100644
+--- a/arch/x86/mm/pat/set_memory.c
++++ b/arch/x86/mm/pat/set_memory.c
+@@ -42,7 +42,8 @@ struct cpa_data {
+ 	unsigned long	pfn;
+ 	unsigned int	flags;
+ 	unsigned int	force_split		: 1,
+-			force_static_prot	: 1;
++			force_static_prot	: 1,
++			force_flush_all		: 1;
+ 	struct page	**pages;
+ };
+ 
+@@ -352,10 +353,10 @@ static void cpa_flush(struct cpa_data *data, int cache)
+ 		return;
+ 	}
+ 
+-	if (cpa->numpages <= tlb_single_page_flush_ceiling)
+-		on_each_cpu(__cpa_flush_tlb, cpa, 1);
+-	else
++	if (cpa->force_flush_all || cpa->numpages > tlb_single_page_flush_ceiling)
+ 		flush_tlb_all();
++	else
++		on_each_cpu(__cpa_flush_tlb, cpa, 1);
+ 
+ 	if (!cache)
+ 		return;
+@@ -1595,6 +1596,8 @@ static int cpa_process_alias(struct cpa_data *cpa)
+ 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
+ 		alias_cpa.curpage = 0;
+ 
++		cpa->force_flush_all = 1;
++
+ 		ret = __change_page_attr_set_clr(&alias_cpa, 0);
+ 		if (ret)
+ 			return ret;
+@@ -1615,6 +1618,7 @@ static int cpa_process_alias(struct cpa_data *cpa)
+ 		alias_cpa.flags &= ~(CPA_PAGES_ARRAY | CPA_ARRAY);
+ 		alias_cpa.curpage = 0;
+ 
++		cpa->force_flush_all = 1;
+ 		/*
+ 		 * The high mapping range is imprecise, so ignore the
+ 		 * return value.
+diff --git a/block/blk-iocost.c b/block/blk-iocost.c
+index 2dc5dc54e257..d083f7704082 100644
+--- a/block/blk-iocost.c
++++ b/block/blk-iocost.c
+@@ -469,7 +469,7 @@ struct ioc_gq {
+ 	 */
+ 	atomic64_t			vtime;
+ 	atomic64_t			done_vtime;
+-	atomic64_t			abs_vdebt;
++	u64				abs_vdebt;
+ 	u64				last_vtime;
+ 
+ 	/*
+@@ -1145,7 +1145,7 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, struct ioc_now *now)
+ 	struct iocg_wake_ctx ctx = { .iocg = iocg };
+ 	u64 margin_ns = (u64)(ioc->period_us *
+ 			      WAITQ_TIMER_MARGIN_PCT / 100) * NSEC_PER_USEC;
+-	u64 abs_vdebt, vdebt, vshortage, expires, oexpires;
++	u64 vdebt, vshortage, expires, oexpires;
+ 	s64 vbudget;
+ 	u32 hw_inuse;
+ 
+@@ -1155,18 +1155,15 @@ static void iocg_kick_waitq(struct ioc_gq *iocg, struct ioc_now *now)
+ 	vbudget = now->vnow - atomic64_read(&iocg->vtime);
+ 
+ 	/* pay off debt */
+-	abs_vdebt = atomic64_read(&iocg->abs_vdebt);
+-	vdebt = abs_cost_to_cost(abs_vdebt, hw_inuse);
++	vdebt = abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
+ 	if (vdebt && vbudget > 0) {
+ 		u64 delta = min_t(u64, vbudget, vdebt);
+ 		u64 abs_delta = min(cost_to_abs_cost(delta, hw_inuse),
+-				    abs_vdebt);
++				    iocg->abs_vdebt);
+ 
+ 		atomic64_add(delta, &iocg->vtime);
+ 		atomic64_add(delta, &iocg->done_vtime);
+-		atomic64_sub(abs_delta, &iocg->abs_vdebt);
+-		if (WARN_ON_ONCE(atomic64_read(&iocg->abs_vdebt) < 0))
+-			atomic64_set(&iocg->abs_vdebt, 0);
++		iocg->abs_vdebt -= abs_delta;
+ 	}
+ 
+ 	/*
+@@ -1222,12 +1219,18 @@ static bool iocg_kick_delay(struct ioc_gq *iocg, struct ioc_now *now, u64 cost)
+ 	u64 expires, oexpires;
+ 	u32 hw_inuse;
+ 
++	lockdep_assert_held(&iocg->waitq.lock);
++
+ 	/* debt-adjust vtime */
+ 	current_hweight(iocg, NULL, &hw_inuse);
+-	vtime += abs_cost_to_cost(atomic64_read(&iocg->abs_vdebt), hw_inuse);
++	vtime += abs_cost_to_cost(iocg->abs_vdebt, hw_inuse);
+ 
+-	/* clear or maintain depending on the overage */
+-	if (time_before_eq64(vtime, now->vnow)) {
++	/*
++	 * Clear or maintain depending on the overage. Non-zero vdebt is what
++	 * guarantees that @iocg is online and future iocg_kick_delay() will
++	 * clear use_delay. Don't leave it on when there's no vdebt.
++	 */
++	if (!iocg->abs_vdebt || time_before_eq64(vtime, now->vnow)) {
+ 		blkcg_clear_delay(blkg);
+ 		return false;
+ 	}
+@@ -1261,9 +1264,12 @@ static enum hrtimer_restart iocg_delay_timer_fn(struct hrtimer *timer)
+ {
+ 	struct ioc_gq *iocg = container_of(timer, struct ioc_gq, delay_timer);
+ 	struct ioc_now now;
++	unsigned long flags;
+ 
++	spin_lock_irqsave(&iocg->waitq.lock, flags);
+ 	ioc_now(iocg->ioc, &now);
+ 	iocg_kick_delay(iocg, &now, 0);
++	spin_unlock_irqrestore(&iocg->waitq.lock, flags);
+ 
+ 	return HRTIMER_NORESTART;
+ }
+@@ -1371,14 +1377,13 @@ static void ioc_timer_fn(struct timer_list *timer)
+ 	 * should have woken up in the last period and expire idle iocgs.
+ 	 */
+ 	list_for_each_entry_safe(iocg, tiocg, &ioc->active_iocgs, active_list) {
+-		if (!waitqueue_active(&iocg->waitq) &&
+-		    !atomic64_read(&iocg->abs_vdebt) && !iocg_is_idle(iocg))
++		if (!waitqueue_active(&iocg->waitq) && iocg->abs_vdebt &&
++		    !iocg_is_idle(iocg))
+ 			continue;
+ 
+ 		spin_lock(&iocg->waitq.lock);
+ 
+-		if (waitqueue_active(&iocg->waitq) ||
+-		    atomic64_read(&iocg->abs_vdebt)) {
++		if (waitqueue_active(&iocg->waitq) || iocg->abs_vdebt) {
+ 			/* might be oversleeping vtime / hweight changes, kick */
+ 			iocg_kick_waitq(iocg, &now);
+ 			iocg_kick_delay(iocg, &now, 0);
+@@ -1721,28 +1726,49 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
+ 	 * tests are racy but the races aren't systemic - we only miss once
+ 	 * in a while which is fine.
+ 	 */
+-	if (!waitqueue_active(&iocg->waitq) &&
+-	    !atomic64_read(&iocg->abs_vdebt) &&
++	if (!waitqueue_active(&iocg->waitq) && !iocg->abs_vdebt &&
+ 	    time_before_eq64(vtime + cost, now.vnow)) {
+ 		iocg_commit_bio(iocg, bio, cost);
+ 		return;
+ 	}
+ 
+ 	/*
+-	 * We're over budget.  If @bio has to be issued regardless,
+-	 * remember the abs_cost instead of advancing vtime.
+-	 * iocg_kick_waitq() will pay off the debt before waking more IOs.
++	 * We activated above but w/o any synchronization. Deactivation is
++	 * synchronized with waitq.lock and we won't get deactivated as long
++	 * as we're waiting or has debt, so we're good if we're activated
++	 * here. In the unlikely case that we aren't, just issue the IO.
++	 */
++	spin_lock_irq(&iocg->waitq.lock);
++
++	if (unlikely(list_empty(&iocg->active_list))) {
++		spin_unlock_irq(&iocg->waitq.lock);
++		iocg_commit_bio(iocg, bio, cost);
++		return;
++	}
++
++	/*
++	 * We're over budget. If @bio has to be issued regardless, remember
++	 * the abs_cost instead of advancing vtime. iocg_kick_waitq() will pay
++	 * off the debt before waking more IOs.
++	 *
+ 	 * This way, the debt is continuously paid off each period with the
+-	 * actual budget available to the cgroup.  If we just wound vtime,
+-	 * we would incorrectly use the current hw_inuse for the entire
+-	 * amount which, for example, can lead to the cgroup staying
+-	 * blocked for a long time even with substantially raised hw_inuse.
++	 * actual budget available to the cgroup. If we just wound vtime, we
++	 * would incorrectly use the current hw_inuse for the entire amount
++	 * which, for example, can lead to the cgroup staying blocked for a
++	 * long time even with substantially raised hw_inuse.
++	 *
++	 * An iocg with vdebt should stay online so that the timer can keep
++	 * deducting its vdebt and [de]activate use_delay mechanism
++	 * accordingly. We don't want to race against the timer trying to
++	 * clear them and leave @iocg inactive w/ dangling use_delay heavily
++	 * penalizing the cgroup and its descendants.
+ 	 */
+ 	if (bio_issue_as_root_blkg(bio) || fatal_signal_pending(current)) {
+-		atomic64_add(abs_cost, &iocg->abs_vdebt);
++		iocg->abs_vdebt += abs_cost;
+ 		if (iocg_kick_delay(iocg, &now, cost))
+ 			blkcg_schedule_throttle(rqos->q,
+ 					(bio->bi_opf & REQ_SWAP) == REQ_SWAP);
++		spin_unlock_irq(&iocg->waitq.lock);
+ 		return;
+ 	}
+ 
+@@ -1759,20 +1785,6 @@ static void ioc_rqos_throttle(struct rq_qos *rqos, struct bio *bio)
+ 	 * All waiters are on iocg->waitq and the wait states are
+ 	 * synchronized using waitq.lock.
+ 	 */
+-	spin_lock_irq(&iocg->waitq.lock);
+-
+-	/*
+-	 * We activated above but w/o any synchronization.  Deactivation is
+-	 * synchronized with waitq.lock and we won't get deactivated as
+-	 * long as we're waiting, so we're good if we're activated here.
+-	 * In the unlikely case that we are deactivated, just issue the IO.
+-	 */
+-	if (unlikely(list_empty(&iocg->active_list))) {
+-		spin_unlock_irq(&iocg->waitq.lock);
+-		iocg_commit_bio(iocg, bio, cost);
+-		return;
+-	}
+-
+ 	init_waitqueue_func_entry(&wait.wait, iocg_wake_fn);
+ 	wait.wait.private = current;
+ 	wait.bio = bio;
+@@ -1804,6 +1816,7 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
+ 	struct ioc_now now;
+ 	u32 hw_inuse;
+ 	u64 abs_cost, cost;
++	unsigned long flags;
+ 
+ 	/* bypass if disabled or for root cgroup */
+ 	if (!ioc->enabled || !iocg->level)
+@@ -1823,15 +1836,28 @@ static void ioc_rqos_merge(struct rq_qos *rqos, struct request *rq,
+ 		iocg->cursor = bio_end;
+ 
+ 	/*
+-	 * Charge if there's enough vtime budget and the existing request
+-	 * has cost assigned.  Otherwise, account it as debt.  See debt
+-	 * handling in ioc_rqos_throttle() for details.
++	 * Charge if there's enough vtime budget and the existing request has
++	 * cost assigned.
+ 	 */
+ 	if (rq->bio && rq->bio->bi_iocost_cost &&
+-	    time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow))
++	    time_before_eq64(atomic64_read(&iocg->vtime) + cost, now.vnow)) {
+ 		iocg_commit_bio(iocg, bio, cost);
+-	else
+-		atomic64_add(abs_cost, &iocg->abs_vdebt);
++		return;
++	}
++
++	/*
++	 * Otherwise, account it as debt if @iocg is online, which it should
++	 * be for the vast majority of cases. See debt handling in
++	 * ioc_rqos_throttle() for details.
++	 */
++	spin_lock_irqsave(&iocg->waitq.lock, flags);
++	if (likely(!list_empty(&iocg->active_list))) {
++		iocg->abs_vdebt += abs_cost;
++		iocg_kick_delay(iocg, &now, cost);
++	} else {
++		iocg_commit_bio(iocg, bio, cost);
++	}
++	spin_unlock_irqrestore(&iocg->waitq.lock, flags);
+ }
+ 
+ static void ioc_rqos_done_bio(struct rq_qos *rqos, struct bio *bio)
+@@ -2001,7 +2027,6 @@ static void ioc_pd_init(struct blkg_policy_data *pd)
+ 	iocg->ioc = ioc;
+ 	atomic64_set(&iocg->vtime, now.vnow);
+ 	atomic64_set(&iocg->done_vtime, now.vnow);
+-	atomic64_set(&iocg->abs_vdebt, 0);
+ 	atomic64_set(&iocg->active_period, atomic64_read(&ioc->cur_period));
+ 	INIT_LIST_HEAD(&iocg->active_list);
+ 	iocg->hweight_active = HWEIGHT_WHOLE;
+diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
+index fe1523664816..8558b629880b 100644
+--- a/drivers/amba/bus.c
++++ b/drivers/amba/bus.c
+@@ -645,6 +645,7 @@ static void amba_device_initialize(struct amba_device *dev, const char *name)
+ 	dev->dev.release = amba_device_release;
+ 	dev->dev.bus = &amba_bustype;
+ 	dev->dev.dma_mask = &dev->dev.coherent_dma_mask;
++	dev->dev.dma_parms = &dev->dma_parms;
+ 	dev->res.name = dev_name(&dev->dev);
+ }
+ 
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index b5ce7b085795..c81b68d5d66d 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -361,6 +361,8 @@ struct platform_object {
+  */
+ static void setup_pdev_dma_masks(struct platform_device *pdev)
+ {
++	pdev->dev.dma_parms = &pdev->dma_parms;
++
+ 	if (!pdev->dev.coherent_dma_mask)
+ 		pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+ 	if (!pdev->dev.dma_mask) {
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+index f184cdca938d..5fcbacddb9b0 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+@@ -3325,15 +3325,12 @@ int amdgpu_device_suspend(struct drm_device *dev, bool fbcon)
+ 		}
+ 	}
+ 
+-	amdgpu_device_set_pg_state(adev, AMD_PG_STATE_UNGATE);
+-	amdgpu_device_set_cg_state(adev, AMD_CG_STATE_UNGATE);
+-
+-	amdgpu_amdkfd_suspend(adev);
+-
+ 	amdgpu_ras_suspend(adev);
+ 
+ 	r = amdgpu_device_ip_suspend_phase1(adev);
+ 
++	amdgpu_amdkfd_suspend(adev);
++
+ 	/* evict vram memory */
+ 	amdgpu_bo_evict_vram(adev);
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index e310d67c399a..1b0bca9587d0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -3034,25 +3034,32 @@ validate_out:
+ 	return out;
+ }
+ 
+-
+-bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
+-		bool fast_validate)
++/*
++ * This must be noinline to ensure anything that deals with FP registers
++ * is contained within this call; previously our compiling with hard-float
++ * would result in fp instructions being emitted outside of the boundaries
++ * of the DC_FP_START/END macros, which makes sense as the compiler has no
++ * idea about what is wrapped and what is not
++ *
++ * This is largely just a workaround to avoid breakage introduced with 5.6,
++ * ideally all fp-using code should be moved into its own file, only that
++ * should be compiled with hard-float, and all code exported from there
++ * should be strictly wrapped with DC_FP_START/END
++ */
++static noinline bool dcn20_validate_bandwidth_fp(struct dc *dc,
++		struct dc_state *context, bool fast_validate)
+ {
+ 	bool voltage_supported = false;
+ 	bool full_pstate_supported = false;
+ 	bool dummy_pstate_supported = false;
+ 	double p_state_latency_us;
+ 
+-	DC_FP_START();
+ 	p_state_latency_us = context->bw_ctx.dml.soc.dram_clock_change_latency_us;
+ 	context->bw_ctx.dml.soc.disable_dram_clock_change_vactive_support =
+ 		dc->debug.disable_dram_clock_change_vactive_support;
+ 
+ 	if (fast_validate) {
+-		voltage_supported = dcn20_validate_bandwidth_internal(dc, context, true);
+-
+-		DC_FP_END();
+-		return voltage_supported;
++		return dcn20_validate_bandwidth_internal(dc, context, true);
+ 	}
+ 
+ 	// Best case, we support full UCLK switch latency
+@@ -3081,7 +3088,15 @@ bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
+ 
+ restore_dml_state:
+ 	context->bw_ctx.dml.soc.dram_clock_change_latency_us = p_state_latency_us;
++	return voltage_supported;
++}
+ 
++bool dcn20_validate_bandwidth(struct dc *dc, struct dc_state *context,
++		bool fast_validate)
++{
++	bool voltage_supported = false;
++	DC_FP_START();
++	voltage_supported = dcn20_validate_bandwidth_fp(dc, context, fast_validate);
+ 	DC_FP_END();
+ 	return voltage_supported;
+ }
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c
+index 6d47ef7b148c..bcba2f024842 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm.c
+@@ -843,6 +843,7 @@ static const struct of_device_id ingenic_drm_of_match[] = {
+ 	{ .compatible = "ingenic,jz4770-lcd", .data = &jz4770_soc_info },
+ 	{ /* sentinel */ },
+ };
++MODULE_DEVICE_TABLE(of, ingenic_drm_of_match);
+ 
+ static struct platform_driver ingenic_drm_driver = {
+ 	.driver = {
+diff --git a/drivers/hid/usbhid/hid-core.c b/drivers/hid/usbhid/hid-core.c
+index c7bc9db5b192..17a638f15082 100644
+--- a/drivers/hid/usbhid/hid-core.c
++++ b/drivers/hid/usbhid/hid-core.c
+@@ -682,16 +682,21 @@ static int usbhid_open(struct hid_device *hid)
+ 	struct usbhid_device *usbhid = hid->driver_data;
+ 	int res;
+ 
++	mutex_lock(&usbhid->mutex);
++
+ 	set_bit(HID_OPENED, &usbhid->iofl);
+ 
+-	if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
+-		return 0;
++	if (hid->quirks & HID_QUIRK_ALWAYS_POLL) {
++		res = 0;
++		goto Done;
++	}
+ 
+ 	res = usb_autopm_get_interface(usbhid->intf);
+ 	/* the device must be awake to reliably request remote wakeup */
+ 	if (res < 0) {
+ 		clear_bit(HID_OPENED, &usbhid->iofl);
+-		return -EIO;
++		res = -EIO;
++		goto Done;
+ 	}
+ 
+ 	usbhid->intf->needs_remote_wakeup = 1;
+@@ -725,6 +730,9 @@ static int usbhid_open(struct hid_device *hid)
+ 		msleep(50);
+ 
+ 	clear_bit(HID_RESUME_RUNNING, &usbhid->iofl);
++
++ Done:
++	mutex_unlock(&usbhid->mutex);
+ 	return res;
+ }
+ 
+@@ -732,6 +740,8 @@ static void usbhid_close(struct hid_device *hid)
+ {
+ 	struct usbhid_device *usbhid = hid->driver_data;
+ 
++	mutex_lock(&usbhid->mutex);
++
+ 	/*
+ 	 * Make sure we don't restart data acquisition due to
+ 	 * a resumption we no longer care about by avoiding racing
+@@ -743,12 +753,13 @@ static void usbhid_close(struct hid_device *hid)
+ 		clear_bit(HID_IN_POLLING, &usbhid->iofl);
+ 	spin_unlock_irq(&usbhid->lock);
+ 
+-	if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
+-		return;
++	if (!(hid->quirks & HID_QUIRK_ALWAYS_POLL)) {
++		hid_cancel_delayed_stuff(usbhid);
++		usb_kill_urb(usbhid->urbin);
++		usbhid->intf->needs_remote_wakeup = 0;
++	}
+ 
+-	hid_cancel_delayed_stuff(usbhid);
+-	usb_kill_urb(usbhid->urbin);
+-	usbhid->intf->needs_remote_wakeup = 0;
++	mutex_unlock(&usbhid->mutex);
+ }
+ 
+ /*
+@@ -1057,6 +1068,8 @@ static int usbhid_start(struct hid_device *hid)
+ 	unsigned int n, insize = 0;
+ 	int ret;
+ 
++	mutex_lock(&usbhid->mutex);
++
+ 	clear_bit(HID_DISCONNECTED, &usbhid->iofl);
+ 
+ 	usbhid->bufsize = HID_MIN_BUFFER_SIZE;
+@@ -1177,6 +1190,8 @@ static int usbhid_start(struct hid_device *hid)
+ 		usbhid_set_leds(hid);
+ 		device_set_wakeup_enable(&dev->dev, 1);
+ 	}
++
++	mutex_unlock(&usbhid->mutex);
+ 	return 0;
+ 
+ fail:
+@@ -1187,6 +1202,7 @@ fail:
+ 	usbhid->urbout = NULL;
+ 	usbhid->urbctrl = NULL;
+ 	hid_free_buffers(dev, hid);
++	mutex_unlock(&usbhid->mutex);
+ 	return ret;
+ }
+ 
+@@ -1202,6 +1218,8 @@ static void usbhid_stop(struct hid_device *hid)
+ 		usbhid->intf->needs_remote_wakeup = 0;
+ 	}
+ 
++	mutex_lock(&usbhid->mutex);
++
+ 	clear_bit(HID_STARTED, &usbhid->iofl);
+ 	spin_lock_irq(&usbhid->lock);	/* Sync with error and led handlers */
+ 	set_bit(HID_DISCONNECTED, &usbhid->iofl);
+@@ -1222,6 +1240,8 @@ static void usbhid_stop(struct hid_device *hid)
+ 	usbhid->urbout = NULL;
+ 
+ 	hid_free_buffers(hid_to_usb_dev(hid), hid);
++
++	mutex_unlock(&usbhid->mutex);
+ }
+ 
+ static int usbhid_power(struct hid_device *hid, int lvl)
+@@ -1382,6 +1402,7 @@ static int usbhid_probe(struct usb_interface *intf, const struct usb_device_id *
+ 	INIT_WORK(&usbhid->reset_work, hid_reset);
+ 	timer_setup(&usbhid->io_retry, hid_retry_timeout, 0);
+ 	spin_lock_init(&usbhid->lock);
++	mutex_init(&usbhid->mutex);
+ 
+ 	ret = hid_add_device(hid);
+ 	if (ret) {
+diff --git a/drivers/hid/usbhid/usbhid.h b/drivers/hid/usbhid/usbhid.h
+index 8620408bd7af..75fe85d3d27a 100644
+--- a/drivers/hid/usbhid/usbhid.h
++++ b/drivers/hid/usbhid/usbhid.h
+@@ -80,6 +80,7 @@ struct usbhid_device {
+ 	dma_addr_t outbuf_dma;                                          /* Output buffer dma */
+ 	unsigned long last_out;							/* record of last output for timeouts */
+ 
++	struct mutex mutex;						/* start/stop/open/close */
+ 	spinlock_t lock;						/* fifo spinlock */
+ 	unsigned long iofl;                                             /* I/O flags (CTRL_RUNNING, OUT_RUNNING) */
+ 	struct timer_list io_retry;                                     /* Retry timer */
+diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
+index 5ded94b7bf68..cd71e7133944 100644
+--- a/drivers/hid/wacom_sys.c
++++ b/drivers/hid/wacom_sys.c
+@@ -319,9 +319,11 @@ static void wacom_feature_mapping(struct hid_device *hdev,
+ 			data[0] = field->report->id;
+ 			ret = wacom_get_report(hdev, HID_FEATURE_REPORT,
+ 					       data, n, WAC_CMD_RETRIES);
+-			if (ret == n) {
++			if (ret == n && features->type == HID_GENERIC) {
+ 				ret = hid_report_raw_event(hdev,
+ 					HID_FEATURE_REPORT, data, n, 0);
++			} else if (ret == 2 && features->type != HID_GENERIC) {
++				features->touch_max = data[1];
+ 			} else {
+ 				features->touch_max = 16;
+ 				hid_warn(hdev, "wacom_feature_mapping: "
+diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
+index d99a9d407671..1c96809b51c9 100644
+--- a/drivers/hid/wacom_wac.c
++++ b/drivers/hid/wacom_wac.c
+@@ -1427,11 +1427,13 @@ static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom)
+ {
+ 	struct input_dev *pad_input = wacom->pad_input;
+ 	unsigned char *data = wacom->data;
++	int nbuttons = wacom->features.numbered_buttons;
+ 
+-	int buttons = data[282] | ((data[281] & 0x40) << 2);
++	int expresskeys = data[282];
++	int center = (data[281] & 0x40) >> 6;
+ 	int ring = data[285] & 0x7F;
+ 	bool ringstatus = data[285] & 0x80;
+-	bool prox = buttons || ringstatus;
++	bool prox = expresskeys || center || ringstatus;
+ 
+ 	/* Fix touchring data: userspace expects 0 at left and increasing clockwise */
+ 	ring = 71 - ring;
+@@ -1439,7 +1441,8 @@ static void wacom_intuos_pro2_bt_pad(struct wacom_wac *wacom)
+ 	if (ring > 71)
+ 		ring -= 72;
+ 
+-	wacom_report_numbered_buttons(pad_input, 9, buttons);
++	wacom_report_numbered_buttons(pad_input, nbuttons,
++                                      expresskeys | (center << (nbuttons - 1)));
+ 
+ 	input_report_abs(pad_input, ABS_WHEEL, ringstatus ? ring : 0);
+ 
+@@ -2637,9 +2640,25 @@ static void wacom_wac_finger_pre_report(struct hid_device *hdev,
+ 			case HID_DG_TIPSWITCH:
+ 				hid_data->last_slot_field = equivalent_usage;
+ 				break;
++			case HID_DG_CONTACTCOUNT:
++				hid_data->cc_report = report->id;
++				hid_data->cc_index = i;
++				hid_data->cc_value_index = j;
++				break;
+ 			}
+ 		}
+ 	}
++
++	if (hid_data->cc_report != 0 &&
++	    hid_data->cc_index >= 0) {
++		struct hid_field *field = report->field[hid_data->cc_index];
++		int value = field->value[hid_data->cc_value_index];
++		if (value)
++			hid_data->num_expected = value;
++	}
++	else {
++		hid_data->num_expected = wacom_wac->features.touch_max;
++	}
+ }
+ 
+ static void wacom_wac_finger_report(struct hid_device *hdev,
+@@ -2649,7 +2668,6 @@ static void wacom_wac_finger_report(struct hid_device *hdev,
+ 	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ 	struct input_dev *input = wacom_wac->touch_input;
+ 	unsigned touch_max = wacom_wac->features.touch_max;
+-	struct hid_data *hid_data = &wacom_wac->hid_data;
+ 
+ 	/* If more packets of data are expected, give us a chance to
+ 	 * process them rather than immediately syncing a partial
+@@ -2663,7 +2681,6 @@ static void wacom_wac_finger_report(struct hid_device *hdev,
+ 
+ 	input_sync(input);
+ 	wacom_wac->hid_data.num_received = 0;
+-	hid_data->num_expected = 0;
+ 
+ 	/* keep touch state for pen event */
+ 	wacom_wac->shared->touch_down = wacom_wac_finger_count_touches(wacom_wac);
+@@ -2738,73 +2755,12 @@ static void wacom_report_events(struct hid_device *hdev,
+ 	}
+ }
+ 
+-static void wacom_set_num_expected(struct hid_device *hdev,
+-				   struct hid_report *report,
+-				   int collection_index,
+-				   struct hid_field *field,
+-				   int field_index)
+-{
+-	struct wacom *wacom = hid_get_drvdata(hdev);
+-	struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+-	struct hid_data *hid_data = &wacom_wac->hid_data;
+-	unsigned int original_collection_level =
+-		hdev->collection[collection_index].level;
+-	bool end_collection = false;
+-	int i;
+-
+-	if (hid_data->num_expected)
+-		return;
+-
+-	// find the contact count value for this segment
+-	for (i = field_index; i < report->maxfield && !end_collection; i++) {
+-		struct hid_field *field = report->field[i];
+-		unsigned int field_level =
+-			hdev->collection[field->usage[0].collection_index].level;
+-		unsigned int j;
+-
+-		if (field_level != original_collection_level)
+-			continue;
+-
+-		for (j = 0; j < field->maxusage; j++) {
+-			struct hid_usage *usage = &field->usage[j];
+-
+-			if (usage->collection_index != collection_index) {
+-				end_collection = true;
+-				break;
+-			}
+-			if (wacom_equivalent_usage(usage->hid) == HID_DG_CONTACTCOUNT) {
+-				hid_data->cc_report = report->id;
+-				hid_data->cc_index = i;
+-				hid_data->cc_value_index = j;
+-
+-				if (hid_data->cc_report != 0 &&
+-				    hid_data->cc_index >= 0) {
+-
+-					struct hid_field *field =
+-						report->field[hid_data->cc_index];
+-					int value =
+-						field->value[hid_data->cc_value_index];
+-
+-					if (value)
+-						hid_data->num_expected = value;
+-				}
+-			}
+-		}
+-	}
+-
+-	if (hid_data->cc_report == 0 || hid_data->cc_index < 0)
+-		hid_data->num_expected = wacom_wac->features.touch_max;
+-}
+-
+ static int wacom_wac_collection(struct hid_device *hdev, struct hid_report *report,
+ 			 int collection_index, struct hid_field *field,
+ 			 int field_index)
+ {
+ 	struct wacom *wacom = hid_get_drvdata(hdev);
+ 
+-	if (WACOM_FINGER_FIELD(field))
+-		wacom_set_num_expected(hdev, report, collection_index, field,
+-				       field_index);
+ 	wacom_report_events(hdev, report, collection_index, field_index);
+ 
+ 	/*
+diff --git a/drivers/iommu/virtio-iommu.c b/drivers/iommu/virtio-iommu.c
+index 5eed75cd121f..e5dcbe80cf85 100644
+--- a/drivers/iommu/virtio-iommu.c
++++ b/drivers/iommu/virtio-iommu.c
+@@ -453,7 +453,7 @@ static int viommu_add_resv_mem(struct viommu_endpoint *vdev,
+ 	if (!region)
+ 		return -ENOMEM;
+ 
+-	list_add(&vdev->resv_regions, &region->list);
++	list_add(&region->list, &vdev->resv_regions);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/misc/mei/hw-me.c b/drivers/misc/mei/hw-me.c
+index 668418d7ea77..f620442addf5 100644
+--- a/drivers/misc/mei/hw-me.c
++++ b/drivers/misc/mei/hw-me.c
+@@ -1465,6 +1465,13 @@ static const struct mei_cfg mei_me_pch12_cfg = {
+ 	MEI_CFG_DMA_128,
+ };
+ 
++/* LBG with quirk for SPS Firmware exclusion */
++static const struct mei_cfg mei_me_pch12_sps_cfg = {
++	MEI_CFG_PCH8_HFS,
++	MEI_CFG_FW_VER_SUPP,
++	MEI_CFG_FW_SPS,
++};
++
+ /* Tiger Lake and newer devices */
+ static const struct mei_cfg mei_me_pch15_cfg = {
+ 	MEI_CFG_PCH8_HFS,
+@@ -1487,6 +1494,7 @@ static const struct mei_cfg *const mei_cfg_list[] = {
+ 	[MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg,
+ 	[MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg,
+ 	[MEI_ME_PCH12_CFG] = &mei_me_pch12_cfg,
++	[MEI_ME_PCH12_SPS_CFG] = &mei_me_pch12_sps_cfg,
+ 	[MEI_ME_PCH15_CFG] = &mei_me_pch15_cfg,
+ };
+ 
+diff --git a/drivers/misc/mei/hw-me.h b/drivers/misc/mei/hw-me.h
+index 4a8d4dcd5a91..b6b94e211464 100644
+--- a/drivers/misc/mei/hw-me.h
++++ b/drivers/misc/mei/hw-me.h
+@@ -80,6 +80,9 @@ struct mei_me_hw {
+  *                         servers platforms with quirk for
+  *                         SPS firmware exclusion.
+  * @MEI_ME_PCH12_CFG:      Platform Controller Hub Gen12 and newer
++ * @MEI_ME_PCH12_SPS_CFG:  Platform Controller Hub Gen12 and newer
++ *                         servers platforms with quirk for
++ *                         SPS firmware exclusion.
+  * @MEI_ME_PCH15_CFG:      Platform Controller Hub Gen15 and newer
+  * @MEI_ME_NUM_CFG:        Upper Sentinel.
+  */
+@@ -93,6 +96,7 @@ enum mei_cfg_idx {
+ 	MEI_ME_PCH8_CFG,
+ 	MEI_ME_PCH8_SPS_CFG,
+ 	MEI_ME_PCH12_CFG,
++	MEI_ME_PCH12_SPS_CFG,
+ 	MEI_ME_PCH15_CFG,
+ 	MEI_ME_NUM_CFG,
+ };
+diff --git a/drivers/misc/mei/pci-me.c b/drivers/misc/mei/pci-me.c
+index 2eb7b2968e5d..0dd2922aa06d 100644
+--- a/drivers/misc/mei/pci-me.c
++++ b/drivers/misc/mei/pci-me.c
+@@ -79,7 +79,7 @@ static const struct pci_device_id mei_me_pci_tbl[] = {
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_2, MEI_ME_PCH8_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H, MEI_ME_PCH8_SPS_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_SPT_H_2, MEI_ME_PCH8_SPS_CFG)},
+-	{MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_CFG)},
++	{MEI_PCI_DEVICE(MEI_DEV_ID_LBG, MEI_ME_PCH12_SPS_CFG)},
+ 
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_BXT_M, MEI_ME_PCH8_CFG)},
+ 	{MEI_PCI_DEVICE(MEI_DEV_ID_APL_I, MEI_ME_PCH8_CFG)},
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index d28b406a26b1..d0ddd08c4112 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -6662,7 +6662,7 @@ static int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp,
+ 	int rc;
+ 
+ 	if (!mem_size)
+-		return 0;
++		return -EINVAL;
+ 
+ 	ctx_pg->nr_pages = DIV_ROUND_UP(mem_size, BNXT_PAGE_SIZE);
+ 	if (ctx_pg->nr_pages > MAX_CTX_TOTAL_PAGES) {
+@@ -9794,6 +9794,7 @@ static netdev_features_t bnxt_fix_features(struct net_device *dev,
+ 					   netdev_features_t features)
+ {
+ 	struct bnxt *bp = netdev_priv(dev);
++	netdev_features_t vlan_features;
+ 
+ 	if ((features & NETIF_F_NTUPLE) && !bnxt_rfs_capable(bp))
+ 		features &= ~NETIF_F_NTUPLE;
+@@ -9810,12 +9811,14 @@ static netdev_features_t bnxt_fix_features(struct net_device *dev,
+ 	/* Both CTAG and STAG VLAN accelaration on the RX side have to be
+ 	 * turned on or off together.
+ 	 */
+-	if ((features & (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX)) !=
+-	    (NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_STAG_RX)) {
++	vlan_features = features & (NETIF_F_HW_VLAN_CTAG_RX |
++				    NETIF_F_HW_VLAN_STAG_RX);
++	if (vlan_features != (NETIF_F_HW_VLAN_CTAG_RX |
++			      NETIF_F_HW_VLAN_STAG_RX)) {
+ 		if (dev->features & NETIF_F_HW_VLAN_CTAG_RX)
+ 			features &= ~(NETIF_F_HW_VLAN_CTAG_RX |
+ 				      NETIF_F_HW_VLAN_STAG_RX);
+-		else
++		else if (vlan_features)
+ 			features |= NETIF_F_HW_VLAN_CTAG_RX |
+ 				    NETIF_F_HW_VLAN_STAG_RX;
+ 	}
+@@ -12173,12 +12176,15 @@ static pci_ers_result_t bnxt_io_slot_reset(struct pci_dev *pdev)
+ 		bnxt_ulp_start(bp, err);
+ 	}
+ 
+-	if (result != PCI_ERS_RESULT_RECOVERED && netif_running(netdev))
+-		dev_close(netdev);
++	if (result != PCI_ERS_RESULT_RECOVERED) {
++		if (netif_running(netdev))
++			dev_close(netdev);
++		pci_disable_device(pdev);
++	}
+ 
+ 	rtnl_unlock();
+ 
+-	return PCI_ERS_RESULT_RECOVERED;
++	return result;
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index 63b170658532..ef0268649822 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -1064,7 +1064,6 @@ struct bnxt_vf_info {
+ #define BNXT_VF_LINK_FORCED	0x4
+ #define BNXT_VF_LINK_UP		0x8
+ #define BNXT_VF_TRUST		0x10
+-	u32	func_flags; /* func cfg flags */
+ 	u32	min_tx_rate;
+ 	u32	max_tx_rate;
+ 	void	*hwrm_cmd_req_addr;
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
+index 95f893f2a74d..d5c8bd49383a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_devlink.h
+@@ -43,7 +43,7 @@ static inline void bnxt_link_bp_to_dl(struct bnxt *bp, struct devlink *dl)
+ #define BNXT_NVM_CFG_VER_BITS		24
+ #define BNXT_NVM_CFG_VER_BYTES		4
+ 
+-#define BNXT_MSIX_VEC_MAX	1280
++#define BNXT_MSIX_VEC_MAX	512
+ #define BNXT_MSIX_VEC_MIN_MAX	128
+ 
+ enum bnxt_nvm_dir_type {
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+index 2aba1e02a8f4..1259d135c9cc 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+@@ -85,11 +85,10 @@ int bnxt_set_vf_spoofchk(struct net_device *dev, int vf_id, bool setting)
+ 	if (old_setting == setting)
+ 		return 0;
+ 
+-	func_flags = vf->func_flags;
+ 	if (setting)
+-		func_flags |= FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE;
++		func_flags = FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_ENABLE;
+ 	else
+-		func_flags |= FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE;
++		func_flags = FUNC_CFG_REQ_FLAGS_SRC_MAC_ADDR_CHECK_DISABLE;
+ 	/*TODO: if the driver supports VLAN filter on guest VLAN,
+ 	 * the spoof check should also include vlan anti-spoofing
+ 	 */
+@@ -98,7 +97,6 @@ int bnxt_set_vf_spoofchk(struct net_device *dev, int vf_id, bool setting)
+ 	req.flags = cpu_to_le32(func_flags);
+ 	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ 	if (!rc) {
+-		vf->func_flags = func_flags;
+ 		if (setting)
+ 			vf->flags |= BNXT_VF_SPOOFCHK;
+ 		else
+@@ -230,7 +228,6 @@ int bnxt_set_vf_mac(struct net_device *dev, int vf_id, u8 *mac)
+ 	memcpy(vf->mac_addr, mac, ETH_ALEN);
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1);
+ 	req.fid = cpu_to_le16(vf->fw_fid);
+-	req.flags = cpu_to_le32(vf->func_flags);
+ 	req.enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_DFLT_MAC_ADDR);
+ 	memcpy(req.dflt_mac_addr, mac, ETH_ALEN);
+ 	return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+@@ -268,7 +265,6 @@ int bnxt_set_vf_vlan(struct net_device *dev, int vf_id, u16 vlan_id, u8 qos,
+ 
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1);
+ 	req.fid = cpu_to_le16(vf->fw_fid);
+-	req.flags = cpu_to_le32(vf->func_flags);
+ 	req.dflt_vlan = cpu_to_le16(vlan_tag);
+ 	req.enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_DFLT_VLAN);
+ 	rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+@@ -307,7 +303,6 @@ int bnxt_set_vf_bw(struct net_device *dev, int vf_id, int min_tx_rate,
+ 		return 0;
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1);
+ 	req.fid = cpu_to_le16(vf->fw_fid);
+-	req.flags = cpu_to_le32(vf->func_flags);
+ 	req.enables = cpu_to_le32(FUNC_CFG_REQ_ENABLES_MAX_BW);
+ 	req.max_bw = cpu_to_le32(max_tx_rate);
+ 	req.enables |= cpu_to_le32(FUNC_CFG_REQ_ENABLES_MIN_BW);
+@@ -479,7 +474,6 @@ static void __bnxt_set_vf_params(struct bnxt *bp, int vf_id)
+ 	vf = &bp->pf.vf[vf_id];
+ 	bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_CFG, -1, -1);
+ 	req.fid = cpu_to_le16(vf->fw_fid);
+-	req.flags = cpu_to_le32(vf->func_flags);
+ 
+ 	if (is_valid_ether_addr(vf->mac_addr)) {
+ 		req.enables |= cpu_to_le32(FUNC_CFG_REQ_ENABLES_DFLT_MAC_ADDR);
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index b3a51935e8e0..f42382c2ecd0 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -334,8 +334,10 @@ static int macb_mdio_read(struct mii_bus *bus, int mii_id, int regnum)
+ 	int status;
+ 
+ 	status = pm_runtime_get_sync(&bp->pdev->dev);
+-	if (status < 0)
++	if (status < 0) {
++		pm_runtime_put_noidle(&bp->pdev->dev);
+ 		goto mdio_pm_exit;
++	}
+ 
+ 	status = macb_mdio_wait_for_idle(bp);
+ 	if (status < 0)
+@@ -386,8 +388,10 @@ static int macb_mdio_write(struct mii_bus *bus, int mii_id, int regnum,
+ 	int status;
+ 
+ 	status = pm_runtime_get_sync(&bp->pdev->dev);
+-	if (status < 0)
++	if (status < 0) {
++		pm_runtime_put_noidle(&bp->pdev->dev);
+ 		goto mdio_pm_exit;
++	}
+ 
+ 	status = macb_mdio_wait_for_idle(bp);
+ 	if (status < 0)
+@@ -3803,8 +3807,10 @@ static int at91ether_open(struct net_device *dev)
+ 	int ret;
+ 
+ 	ret = pm_runtime_get_sync(&lp->pdev->dev);
+-	if (ret < 0)
++	if (ret < 0) {
++		pm_runtime_put_noidle(&lp->pdev->dev);
+ 		return ret;
++	}
+ 
+ 	/* Clear internal statistics */
+ 	ctl = macb_readl(lp, NCR);
+@@ -4159,15 +4165,9 @@ static int fu540_c000_clk_init(struct platform_device *pdev, struct clk **pclk,
+ 
+ static int fu540_c000_init(struct platform_device *pdev)
+ {
+-	struct resource *res;
+-
+-	res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+-	if (!res)
+-		return -ENODEV;
+-
+-	mgmt->reg = ioremap(res->start, resource_size(res));
+-	if (!mgmt->reg)
+-		return -ENOMEM;
++	mgmt->reg = devm_platform_ioremap_resource(pdev, 1);
++	if (IS_ERR(mgmt->reg))
++		return PTR_ERR(mgmt->reg);
+ 
+ 	return macb_init(pdev);
+ }
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+index cab3d17e0e1a..d6eebd640753 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
+@@ -2202,6 +2202,9 @@ static void ethofld_hard_xmit(struct net_device *dev,
+ 	if (unlikely(skip_eotx_wr)) {
+ 		start = (u64 *)wr;
+ 		eosw_txq->state = next_state;
++		eosw_txq->cred -= wrlen16;
++		eosw_txq->ncompl++;
++		eosw_txq->last_compl = 0;
+ 		goto write_wr_headers;
+ 	}
+ 
+@@ -2360,6 +2363,34 @@ netdev_tx_t t4_start_xmit(struct sk_buff *skb, struct net_device *dev)
+ 	return cxgb4_eth_xmit(skb, dev);
+ }
+ 
++static void eosw_txq_flush_pending_skbs(struct sge_eosw_txq *eosw_txq)
++{
++	int pktcount = eosw_txq->pidx - eosw_txq->last_pidx;
++	int pidx = eosw_txq->pidx;
++	struct sk_buff *skb;
++
++	if (!pktcount)
++		return;
++
++	if (pktcount < 0)
++		pktcount += eosw_txq->ndesc;
++
++	while (pktcount--) {
++		pidx--;
++		if (pidx < 0)
++			pidx += eosw_txq->ndesc;
++
++		skb = eosw_txq->desc[pidx].skb;
++		if (skb) {
++			dev_consume_skb_any(skb);
++			eosw_txq->desc[pidx].skb = NULL;
++			eosw_txq->inuse--;
++		}
++	}
++
++	eosw_txq->pidx = eosw_txq->last_pidx + 1;
++}
++
+ /**
+  * cxgb4_ethofld_send_flowc - Send ETHOFLD flowc request to bind eotid to tc.
+  * @dev - netdevice
+@@ -2435,9 +2466,11 @@ int cxgb4_ethofld_send_flowc(struct net_device *dev, u32 eotid, u32 tc)
+ 					    FW_FLOWC_MNEM_EOSTATE_CLOSING :
+ 					    FW_FLOWC_MNEM_EOSTATE_ESTABLISHED);
+ 
+-	eosw_txq->cred -= len16;
+-	eosw_txq->ncompl++;
+-	eosw_txq->last_compl = 0;
++	/* Free up any pending skbs to ensure there's room for
++	 * termination FLOWC.
++	 */
++	if (tc == FW_SCHED_CLS_NONE)
++		eosw_txq_flush_pending_skbs(eosw_txq);
+ 
+ 	ret = eosw_txq_enqueue(eosw_txq, skb);
+ 	if (ret) {
+diff --git a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
+index ebc635f8a4cc..15f37c5b8dc1 100644
+--- a/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
++++ b/drivers/net/ethernet/freescale/enetc/enetc_pci_mdio.c
+@@ -74,8 +74,8 @@ err_pci_mem_reg:
+ 	pci_disable_device(pdev);
+ err_pci_enable:
+ err_mdiobus_alloc:
+-	iounmap(port_regs);
+ err_hw_alloc:
++	iounmap(port_regs);
+ err_ioremap:
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+index 35478cba2aa5..4344a59c823f 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+@@ -1422,6 +1422,9 @@ int mvpp2_ethtool_cls_rule_del(struct mvpp2_port *port,
+ 	struct mvpp2_ethtool_fs *efs;
+ 	int ret;
+ 
++	if (info->fs.location >= MVPP2_N_RFS_ENTRIES_PER_FLOW)
++		return -EINVAL;
++
+ 	efs = port->rfs_rules[info->fs.location];
+ 	if (!efs)
+ 		return -EINVAL;
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+index 72133cbe55d4..eb78a948bee3 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4325,6 +4325,8 @@ static int mvpp2_ethtool_get_rxfh_context(struct net_device *dev, u32 *indir,
+ 
+ 	if (!mvpp22_rss_is_supported())
+ 		return -EOPNOTSUPP;
++	if (rss_context >= MVPP22_N_RSS_TABLES)
++		return -EINVAL;
+ 
+ 	if (hfunc)
+ 		*hfunc = ETH_RSS_HASH_CRC32;
+diff --git a/drivers/net/ethernet/mellanox/mlx4/main.c b/drivers/net/ethernet/mellanox/mlx4/main.c
+index 5716c3d2bb86..c72c4e1ea383 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/main.c
++++ b/drivers/net/ethernet/mellanox/mlx4/main.c
+@@ -2550,6 +2550,7 @@ static int mlx4_allocate_default_counters(struct mlx4_dev *dev)
+ 
+ 		if (!err || err == -ENOSPC) {
+ 			priv->def_counter[port] = idx;
++			err = 0;
+ 		} else if (err == -ENOENT) {
+ 			err = 0;
+ 			continue;
+@@ -2600,7 +2601,8 @@ int mlx4_counter_alloc(struct mlx4_dev *dev, u32 *idx, u8 usage)
+ 				   MLX4_CMD_TIME_CLASS_A, MLX4_CMD_WRAPPED);
+ 		if (!err)
+ 			*idx = get_param_l(&out_param);
+-
++		if (WARN_ON(err == -ENOSPC))
++			err = -EINVAL;
+ 		return err;
+ 	}
+ 	return __mlx4_counter_alloc(dev, idx);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index 34cba97f7bf4..cede5bdfd598 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -888,7 +888,6 @@ static void cmd_work_handler(struct work_struct *work)
+ 	}
+ 
+ 	cmd->ent_arr[ent->idx] = ent;
+-	set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
+ 	lay = get_inst(cmd, ent->idx);
+ 	ent->lay = lay;
+ 	memset(lay, 0, sizeof(*lay));
+@@ -910,6 +909,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 
+ 	if (ent->callback)
+ 		schedule_delayed_work(&ent->cb_timeout_work, cb_timeout);
++	set_bit(MLX5_CMD_ENT_STATE_PENDING_COMP, &ent->state);
+ 
+ 	/* Skip sending command to fw if internal error */
+ 	if (pci_channel_offline(dev->pdev) ||
+@@ -922,6 +922,10 @@ static void cmd_work_handler(struct work_struct *work)
+ 		MLX5_SET(mbox_out, ent->out, syndrome, drv_synd);
+ 
+ 		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
++		/* no doorbell, no need to keep the entry */
++		free_ent(cmd, ent->idx);
++		if (ent->callback)
++			free_cmd(ent);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index ffc193c4ad43..2ad0d09cc9bd 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1692,19 +1692,14 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
+ 
+ static int mlx5e_init_ul_rep_rx(struct mlx5e_priv *priv)
+ {
+-	int err = mlx5e_init_rep_rx(priv);
+-
+-	if (err)
+-		return err;
+-
+ 	mlx5e_create_q_counters(priv);
+-	return 0;
++	return mlx5e_init_rep_rx(priv);
+ }
+ 
+ static void mlx5e_cleanup_ul_rep_rx(struct mlx5e_priv *priv)
+ {
+-	mlx5e_destroy_q_counters(priv);
+ 	mlx5e_cleanup_rep_rx(priv);
++	mlx5e_destroy_q_counters(priv);
+ }
+ 
+ static int mlx5e_init_uplink_rep_tx(struct mlx5e_rep_priv *rpriv)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+index 095ec7b1399d..7c77378accf0 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/steering/dr_send.c
+@@ -689,6 +689,12 @@ static void dr_cq_event(struct mlx5_core_cq *mcq,
+ 	pr_info("CQ event %u on CQ #%u\n", event, mcq->cqn);
+ }
+ 
++static void dr_cq_complete(struct mlx5_core_cq *mcq,
++			   struct mlx5_eqe *eqe)
++{
++	pr_err("CQ completion CQ: #%u\n", mcq->cqn);
++}
++
+ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 				      struct mlx5_uars_page *uar,
+ 				      size_t ncqe)
+@@ -750,6 +756,7 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 	mlx5_fill_page_frag_array(&cq->wq_ctrl.buf, pas);
+ 
+ 	cq->mcq.event = dr_cq_event;
++	cq->mcq.comp  = dr_cq_complete;
+ 
+ 	err = mlx5_core_create_cq(mdev, &cq->mcq, in, inlen, out, sizeof(out));
+ 	kvfree(in);
+@@ -761,7 +768,12 @@ static struct mlx5dr_cq *dr_create_cq(struct mlx5_core_dev *mdev,
+ 	cq->mcq.set_ci_db = cq->wq_ctrl.db.db;
+ 	cq->mcq.arm_db = cq->wq_ctrl.db.db + 1;
+ 	*cq->mcq.set_ci_db = 0;
+-	*cq->mcq.arm_db = 0;
++
++	/* set no-zero value, in order to avoid the HW to run db-recovery on
++	 * CQ that used in polling mode.
++	 */
++	*cq->mcq.arm_db = cpu_to_be32(2 << 28);
++
+ 	cq->mcq.vector = 0;
+ 	cq->mcq.irqn = irqn;
+ 	cq->mcq.uar = uar;
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+index e993159e8e4c..295b27112d36 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_acl_tcam.c
+@@ -986,8 +986,9 @@ mlxsw_sp_acl_tcam_vchunk_create(struct mlxsw_sp *mlxsw_sp,
+ 				unsigned int priority,
+ 				struct mlxsw_afk_element_usage *elusage)
+ {
++	struct mlxsw_sp_acl_tcam_vchunk *vchunk, *vchunk2;
+ 	struct mlxsw_sp_acl_tcam_vregion *vregion;
+-	struct mlxsw_sp_acl_tcam_vchunk *vchunk;
++	struct list_head *pos;
+ 	int err;
+ 
+ 	if (priority == MLXSW_SP_ACL_TCAM_CATCHALL_PRIO)
+@@ -1025,7 +1026,14 @@ mlxsw_sp_acl_tcam_vchunk_create(struct mlxsw_sp *mlxsw_sp,
+ 	}
+ 
+ 	mlxsw_sp_acl_tcam_rehash_ctx_vregion_changed(vregion);
+-	list_add_tail(&vchunk->list, &vregion->vchunk_list);
++
++	/* Position the vchunk inside the list according to priority */
++	list_for_each(pos, &vregion->vchunk_list) {
++		vchunk2 = list_entry(pos, typeof(*vchunk2), list);
++		if (vchunk2->priority > priority)
++			break;
++	}
++	list_add_tail(&vchunk->list, pos);
+ 	mutex_unlock(&vregion->lock);
+ 
+ 	return vchunk;
+diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
+index 9183b3e85d21..354efffac0f9 100644
+--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
++++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
+@@ -283,6 +283,7 @@ nfp_abm_vnic_set_mac(struct nfp_pf *pf, struct nfp_abm *abm, struct nfp_net *nn,
+ 	if (!nfp_nsp_has_hwinfo_lookup(nsp)) {
+ 		nfp_warn(pf->cpp, "NSP doesn't support PF MAC generation\n");
+ 		eth_hw_addr_random(nn->dp.netdev);
++		nfp_nsp_close(nsp);
+ 		return;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/toshiba/tc35815.c b/drivers/net/ethernet/toshiba/tc35815.c
+index 3fd43d30b20d..a1066fbb93b5 100644
+--- a/drivers/net/ethernet/toshiba/tc35815.c
++++ b/drivers/net/ethernet/toshiba/tc35815.c
+@@ -643,7 +643,7 @@ static int tc_mii_probe(struct net_device *dev)
+ 		linkmode_set_bit(ETHTOOL_LINK_MODE_10baseT_Half_BIT, mask);
+ 		linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Half_BIT, mask);
+ 	}
+-	linkmode_and(phydev->supported, phydev->supported, mask);
++	linkmode_andnot(phydev->supported, phydev->supported, mask);
+ 	linkmode_copy(phydev->advertising, phydev->supported);
+ 
+ 	lp->link = 0;
+diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
+index 35aa7b0a2aeb..11028ef8be4e 100644
+--- a/drivers/net/macsec.c
++++ b/drivers/net/macsec.c
+@@ -1226,7 +1226,8 @@ static struct crypto_aead *macsec_alloc_tfm(char *key, int key_len, int icv_len)
+ 	struct crypto_aead *tfm;
+ 	int ret;
+ 
+-	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
++	/* Pick a sync gcm(aes) cipher to ensure order is preserved. */
++	tfm = crypto_alloc_aead("gcm(aes)", 0, CRYPTO_ALG_ASYNC);
+ 
+ 	if (IS_ERR(tfm))
+ 		return tfm;
+diff --git a/drivers/net/phy/dp83640.c b/drivers/net/phy/dp83640.c
+index ac72a324fcd1..b1d771325c57 100644
+--- a/drivers/net/phy/dp83640.c
++++ b/drivers/net/phy/dp83640.c
+@@ -1120,7 +1120,7 @@ static struct dp83640_clock *dp83640_clock_get_bus(struct mii_bus *bus)
+ 		goto out;
+ 	}
+ 	dp83640_clock_init(clock, bus);
+-	list_add_tail(&phyter_clocks, &clock->list);
++	list_add_tail(&clock->list, &phyter_clocks);
+ out:
+ 	mutex_unlock(&phyter_clocks_lock);
+ 
+diff --git a/drivers/net/phy/marvell10g.c b/drivers/net/phy/marvell10g.c
+index 64c9f3bba2cd..e2658dace15d 100644
+--- a/drivers/net/phy/marvell10g.c
++++ b/drivers/net/phy/marvell10g.c
+@@ -44,6 +44,9 @@ enum {
+ 	MV_PCS_PAIRSWAP_AB	= 0x0002,
+ 	MV_PCS_PAIRSWAP_NONE	= 0x0003,
+ 
++	/* Temperature read register (88E2110 only) */
++	MV_PCS_TEMP		= 0x8042,
++
+ 	/* These registers appear at 0x800X and 0xa00X - the 0xa00X control
+ 	 * registers appear to set themselves to the 0x800X when AN is
+ 	 * restarted, but status registers appear readable from either.
+@@ -54,6 +57,7 @@ enum {
+ 	/* Vendor2 MMD registers */
+ 	MV_V2_PORT_CTRL		= 0xf001,
+ 	MV_V2_PORT_CTRL_PWRDOWN = 0x0800,
++	/* Temperature control/read registers (88X3310 only) */
+ 	MV_V2_TEMP_CTRL		= 0xf08a,
+ 	MV_V2_TEMP_CTRL_MASK	= 0xc000,
+ 	MV_V2_TEMP_CTRL_SAMPLE	= 0x0000,
+@@ -79,6 +83,24 @@ static umode_t mv3310_hwmon_is_visible(const void *data,
+ 	return 0;
+ }
+ 
++static int mv3310_hwmon_read_temp_reg(struct phy_device *phydev)
++{
++	return phy_read_mmd(phydev, MDIO_MMD_VEND2, MV_V2_TEMP);
++}
++
++static int mv2110_hwmon_read_temp_reg(struct phy_device *phydev)
++{
++	return phy_read_mmd(phydev, MDIO_MMD_PCS, MV_PCS_TEMP);
++}
++
++static int mv10g_hwmon_read_temp_reg(struct phy_device *phydev)
++{
++	if (phydev->drv->phy_id == MARVELL_PHY_ID_88X3310)
++		return mv3310_hwmon_read_temp_reg(phydev);
++	else /* MARVELL_PHY_ID_88E2110 */
++		return mv2110_hwmon_read_temp_reg(phydev);
++}
++
+ static int mv3310_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+ 			     u32 attr, int channel, long *value)
+ {
+@@ -91,7 +113,7 @@ static int mv3310_hwmon_read(struct device *dev, enum hwmon_sensor_types type,
+ 	}
+ 
+ 	if (type == hwmon_temp && attr == hwmon_temp_input) {
+-		temp = phy_read_mmd(phydev, MDIO_MMD_VEND2, MV_V2_TEMP);
++		temp = mv10g_hwmon_read_temp_reg(phydev);
+ 		if (temp < 0)
+ 			return temp;
+ 
+@@ -144,6 +166,9 @@ static int mv3310_hwmon_config(struct phy_device *phydev, bool enable)
+ 	u16 val;
+ 	int ret;
+ 
++	if (phydev->drv->phy_id != MARVELL_PHY_ID_88X3310)
++		return 0;
++
+ 	ret = phy_write_mmd(phydev, MDIO_MMD_VEND2, MV_V2_TEMP,
+ 			    MV_V2_TEMP_UNKNOWN);
+ 	if (ret < 0)
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 6c738a271257..4bb8552a00d3 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1359,6 +1359,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x413c, 0x81b3, 8)},	/* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ 	{QMI_FIXED_INTF(0x413c, 0x81b6, 8)},	/* Dell Wireless 5811e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81b6, 10)},	/* Dell Wireless 5811e */
++	{QMI_FIXED_INTF(0x413c, 0x81cc, 8)},	/* Dell Wireless 5816e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 0)},	/* Dell Wireless 5821e */
+ 	{QMI_FIXED_INTF(0x413c, 0x81d7, 1)},	/* Dell Wireless 5821e preproduction config */
+ 	{QMI_FIXED_INTF(0x413c, 0x81e0, 0)},	/* Dell Wireless 5821e with eSIM support*/
+diff --git a/drivers/net/wireguard/queueing.c b/drivers/net/wireguard/queueing.c
+index 5c964fcb994e..71b8e80b58e1 100644
+--- a/drivers/net/wireguard/queueing.c
++++ b/drivers/net/wireguard/queueing.c
+@@ -35,8 +35,10 @@ int wg_packet_queue_init(struct crypt_queue *queue, work_func_t function,
+ 		if (multicore) {
+ 			queue->worker = wg_packet_percpu_multicore_worker_alloc(
+ 				function, queue);
+-			if (!queue->worker)
++			if (!queue->worker) {
++				ptr_ring_cleanup(&queue->ring, NULL);
+ 				return -ENOMEM;
++			}
+ 		} else {
+ 			INIT_WORK(&queue->work, function);
+ 		}
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index da3b782ab7d3..2566e13a292d 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -393,13 +393,11 @@ static void wg_packet_consume_data_done(struct wg_peer *peer,
+ 		len = ntohs(ip_hdr(skb)->tot_len);
+ 		if (unlikely(len < sizeof(struct iphdr)))
+ 			goto dishonest_packet_size;
+-		if (INET_ECN_is_ce(PACKET_CB(skb)->ds))
+-			IP_ECN_set_ce(ip_hdr(skb));
++		INET_ECN_decapsulate(skb, PACKET_CB(skb)->ds, ip_hdr(skb)->tos);
+ 	} else if (skb->protocol == htons(ETH_P_IPV6)) {
+ 		len = ntohs(ipv6_hdr(skb)->payload_len) +
+ 		      sizeof(struct ipv6hdr);
+-		if (INET_ECN_is_ce(PACKET_CB(skb)->ds))
+-			IP6_ECN_set_ce(skb, ipv6_hdr(skb));
++		INET_ECN_decapsulate(skb, PACKET_CB(skb)->ds, ipv6_get_dsfield(ipv6_hdr(skb)));
+ 	} else {
+ 		goto dishonest_packet_type;
+ 	}
+@@ -518,6 +516,8 @@ void wg_packet_decrypt_worker(struct work_struct *work)
+ 				&PACKET_CB(skb)->keypair->receiving)) ?
+ 				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
+ 		wg_queue_enqueue_per_peer_napi(skb, state);
++		if (need_resched())
++			cond_resched();
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index 7348c10cbae3..e8a7d0a0cb88 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -281,6 +281,8 @@ void wg_packet_tx_worker(struct work_struct *work)
+ 
+ 		wg_noise_keypair_put(keypair, false);
+ 		wg_peer_put(peer);
++		if (need_resched())
++			cond_resched();
+ 	}
+ }
+ 
+@@ -305,6 +307,8 @@ void wg_packet_encrypt_worker(struct work_struct *work)
+ 		wg_queue_enqueue_per_peer(&PACKET_PEER(first)->tx_queue, first,
+ 					  state);
+ 
++		if (need_resched())
++			cond_resched();
+ 	}
+ }
+ 
+diff --git a/drivers/net/wireguard/socket.c b/drivers/net/wireguard/socket.c
+index b0d6541582d3..f9018027fc13 100644
+--- a/drivers/net/wireguard/socket.c
++++ b/drivers/net/wireguard/socket.c
+@@ -76,12 +76,6 @@ static int send4(struct wg_device *wg, struct sk_buff *skb,
+ 			net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
+ 					    wg->dev->name, &endpoint->addr, ret);
+ 			goto err;
+-		} else if (unlikely(rt->dst.dev == skb->dev)) {
+-			ip_rt_put(rt);
+-			ret = -ELOOP;
+-			net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n",
+-					    wg->dev->name, &endpoint->addr);
+-			goto err;
+ 		}
+ 		if (cache)
+ 			dst_cache_set_ip4(cache, &rt->dst, fl.saddr);
+@@ -149,12 +143,6 @@ static int send6(struct wg_device *wg, struct sk_buff *skb,
+ 			net_dbg_ratelimited("%s: No route to %pISpfsc, error %d\n",
+ 					    wg->dev->name, &endpoint->addr, ret);
+ 			goto err;
+-		} else if (unlikely(dst->dev == skb->dev)) {
+-			dst_release(dst);
+-			ret = -ELOOP;
+-			net_dbg_ratelimited("%s: Avoiding routing loop to %pISpfsc\n",
+-					    wg->dev->name, &endpoint->addr);
+-			goto err;
+ 		}
+ 		if (cache)
+ 			dst_cache_set_ip6(cache, dst, &fl.saddr);
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index fb4c35a43065..84f20369d846 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -1075,8 +1075,17 @@ static int nvme_identify_ns_descs(struct nvme_ctrl *ctrl, unsigned nsid,
+ 
+ 	status = nvme_submit_sync_cmd(ctrl->admin_q, &c, data,
+ 				      NVME_IDENTIFY_DATA_SIZE);
+-	if (status)
++	if (status) {
++		dev_warn(ctrl->device,
++			"Identify Descriptors failed (%d)\n", status);
++		 /*
++		  * Don't treat an error as fatal, as we potentially already
++		  * have a NGUID or EUI-64.
++		  */
++		if (status > 0 && !(status & NVME_SC_DNR))
++			status = 0;
+ 		goto free_data;
++	}
+ 
+ 	for (pos = 0; pos < NVME_IDENTIFY_DATA_SIZE; pos += len) {
+ 		struct nvme_ns_id_desc *cur = data + pos;
+@@ -1734,26 +1743,15 @@ static void nvme_config_write_zeroes(struct gendisk *disk, struct nvme_ns *ns)
+ static int nvme_report_ns_ids(struct nvme_ctrl *ctrl, unsigned int nsid,
+ 		struct nvme_id_ns *id, struct nvme_ns_ids *ids)
+ {
+-	int ret = 0;
+-
+ 	memset(ids, 0, sizeof(*ids));
+ 
+ 	if (ctrl->vs >= NVME_VS(1, 1, 0))
+ 		memcpy(ids->eui64, id->eui64, sizeof(id->eui64));
+ 	if (ctrl->vs >= NVME_VS(1, 2, 0))
+ 		memcpy(ids->nguid, id->nguid, sizeof(id->nguid));
+-	if (ctrl->vs >= NVME_VS(1, 3, 0)) {
+-		 /* Don't treat error as fatal we potentially
+-		  * already have a NGUID or EUI-64
+-		  */
+-		ret = nvme_identify_ns_descs(ctrl, nsid, ids);
+-		if (ret)
+-			dev_warn(ctrl->device,
+-				 "Identify Descriptors failed (%d)\n", ret);
+-		if (ret > 0)
+-			ret = 0;
+-	}
+-	return ret;
++	if (ctrl->vs >= NVME_VS(1, 3, 0))
++		return nvme_identify_ns_descs(ctrl, nsid, ids);
++	return 0;
+ }
+ 
+ static bool nvme_ns_ids_valid(struct nvme_ns_ids *ids)
+diff --git a/drivers/staging/gasket/gasket_core.c b/drivers/staging/gasket/gasket_core.c
+index be6b50f454b4..d3f255c740e9 100644
+--- a/drivers/staging/gasket/gasket_core.c
++++ b/drivers/staging/gasket/gasket_core.c
+@@ -926,6 +926,10 @@ do_map_region(const struct gasket_dev *gasket_dev, struct vm_area_struct *vma,
+ 		gasket_get_bar_index(gasket_dev,
+ 				     (vma->vm_pgoff << PAGE_SHIFT) +
+ 				     driver_desc->legacy_mmap_address_offset);
++
++	if (bar_index < 0)
++		return DO_MAP_REGION_INVALID;
++
+ 	phys_base = gasket_dev->bar_data[bar_index].phys_base + phys_offset;
+ 	while (mapped_bytes < map_length) {
+ 		/*
+diff --git a/drivers/thunderbolt/usb4.c b/drivers/thunderbolt/usb4.c
+index b341fc60c4ba..114fbe51527c 100644
+--- a/drivers/thunderbolt/usb4.c
++++ b/drivers/thunderbolt/usb4.c
+@@ -182,6 +182,9 @@ static int usb4_switch_op(struct tb_switch *sw, u16 opcode, u8 *status)
+ 		return ret;
+ 
+ 	ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_26, 1);
++	if (ret)
++		return ret;
++
+ 	if (val & ROUTER_CS_26_ONS)
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/drivers/tty/serial/xilinx_uartps.c b/drivers/tty/serial/xilinx_uartps.c
+index 7a9b360b0438..1d8b6993a435 100644
+--- a/drivers/tty/serial/xilinx_uartps.c
++++ b/drivers/tty/serial/xilinx_uartps.c
+@@ -1471,6 +1471,7 @@ static int cdns_uart_probe(struct platform_device *pdev)
+ 		cdns_uart_uart_driver.nr = CDNS_UART_NR_PORTS;
+ #ifdef CONFIG_SERIAL_XILINX_PS_UART_CONSOLE
+ 		cdns_uart_uart_driver.cons = &cdns_uart_console;
++		cdns_uart_console.index = id;
+ #endif
+ 
+ 		rc = uart_register_driver(&cdns_uart_uart_driver);
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index cc1a04191365..699d8b56cbe7 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -365,9 +365,14 @@ static struct uni_screen *vc_uniscr_alloc(unsigned int cols, unsigned int rows)
+ 	return uniscr;
+ }
+ 
++static void vc_uniscr_free(struct uni_screen *uniscr)
++{
++	vfree(uniscr);
++}
++
+ static void vc_uniscr_set(struct vc_data *vc, struct uni_screen *new_uniscr)
+ {
+-	vfree(vc->vc_uni_screen);
++	vc_uniscr_free(vc->vc_uni_screen);
+ 	vc->vc_uni_screen = new_uniscr;
+ }
+ 
+@@ -1230,7 +1235,7 @@ static int vc_do_resize(struct tty_struct *tty, struct vc_data *vc,
+ 	err = resize_screen(vc, new_cols, new_rows, user);
+ 	if (err) {
+ 		kfree(newscreen);
+-		kfree(new_uniscr);
++		vc_uniscr_free(new_uniscr);
+ 		return err;
+ 	}
+ 
+diff --git a/drivers/usb/chipidea/ci_hdrc_msm.c b/drivers/usb/chipidea/ci_hdrc_msm.c
+index af648ba6544d..46105457e1ca 100644
+--- a/drivers/usb/chipidea/ci_hdrc_msm.c
++++ b/drivers/usb/chipidea/ci_hdrc_msm.c
+@@ -114,7 +114,7 @@ static int ci_hdrc_msm_notify_event(struct ci_hdrc *ci, unsigned event)
+ 			hw_write_id_reg(ci, HS_PHY_GENCONFIG_2,
+ 					HS_PHY_ULPI_TX_PKT_EN_CLR_FIX, 0);
+ 
+-		if (!IS_ERR(ci->platdata->vbus_extcon.edev)) {
++		if (!IS_ERR(ci->platdata->vbus_extcon.edev) || ci->role_switch) {
+ 			hw_write_id_reg(ci, HS_PHY_GENCONFIG_2,
+ 					HS_PHY_SESS_VLD_CTRL_EN,
+ 					HS_PHY_SESS_VLD_CTRL_EN);
+diff --git a/drivers/usb/serial/garmin_gps.c b/drivers/usb/serial/garmin_gps.c
+index ffd984142171..d63072fee099 100644
+--- a/drivers/usb/serial/garmin_gps.c
++++ b/drivers/usb/serial/garmin_gps.c
+@@ -1138,8 +1138,8 @@ static void garmin_read_process(struct garmin_data *garmin_data_p,
+ 		   send it directly to the tty port */
+ 		if (garmin_data_p->flags & FLAGS_QUEUING) {
+ 			pkt_add(garmin_data_p, data, data_length);
+-		} else if (bulk_data ||
+-			   getLayerId(data) == GARMIN_LAYERID_APPL) {
++		} else if (bulk_data || (data_length >= sizeof(u32) &&
++				getLayerId(data) == GARMIN_LAYERID_APPL)) {
+ 
+ 			spin_lock_irqsave(&garmin_data_p->lock, flags);
+ 			garmin_data_p->flags |= APP_RESP_SEEN;
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index 613f91add03d..ce0401d3137f 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -173,6 +173,7 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x413c, 0x81b3)},	/* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ 	{DEVICE_SWI(0x413c, 0x81b5)},	/* Dell Wireless 5811e QDL */
+ 	{DEVICE_SWI(0x413c, 0x81b6)},	/* Dell Wireless 5811e QDL */
++	{DEVICE_SWI(0x413c, 0x81cc)},	/* Dell Wireless 5816e */
+ 	{DEVICE_SWI(0x413c, 0x81cf)},   /* Dell Wireless 5819 */
+ 	{DEVICE_SWI(0x413c, 0x81d0)},   /* Dell Wireless 5819 */
+ 	{DEVICE_SWI(0x413c, 0x81d1)},   /* Dell Wireless 5818 */
+diff --git a/drivers/usb/storage/unusual_uas.h b/drivers/usb/storage/unusual_uas.h
+index 1b23741036ee..37157ed9a881 100644
+--- a/drivers/usb/storage/unusual_uas.h
++++ b/drivers/usb/storage/unusual_uas.h
+@@ -28,6 +28,13 @@
+  * and don't forget to CC: the USB development list <linux-usb@vger.kernel.org>
+  */
+ 
++/* Reported-by: Julian Groß <julian.g@posteo.de> */
++UNUSUAL_DEV(0x059f, 0x105f, 0x0000, 0x9999,
++		"LaCie",
++		"2Big Quadra USB3",
++		USB_SC_DEVICE, USB_PR_DEVICE, NULL,
++		US_FL_NO_REPORT_OPCODES),
++
+ /*
+  * Apricorn USB3 dongle sometimes returns "USBSUSBSUSBS" in response to SCSI
+  * commands in UAS mode.  Observed with the 1.28 firmware; are there others?
+diff --git a/fs/ceph/mds_client.c b/fs/ceph/mds_client.c
+index bbbbddf71326..da7d5c9e3133 100644
+--- a/fs/ceph/mds_client.c
++++ b/fs/ceph/mds_client.c
+@@ -3116,8 +3116,7 @@ static void handle_session(struct ceph_mds_session *session,
+ 	void *end = p + msg->front.iov_len;
+ 	struct ceph_mds_session_head *h;
+ 	u32 op;
+-	u64 seq;
+-	unsigned long features = 0;
++	u64 seq, features = 0;
+ 	int wake = 0;
+ 	bool blacklisted = false;
+ 
+@@ -3136,9 +3135,8 @@ static void handle_session(struct ceph_mds_session *session,
+ 			goto bad;
+ 		/* version >= 3, feature bits */
+ 		ceph_decode_32_safe(&p, end, len, bad);
+-		ceph_decode_need(&p, end, len, bad);
+-		memcpy(&features, p, min_t(size_t, len, sizeof(features)));
+-		p += len;
++		ceph_decode_64_safe(&p, end, features, bad);
++		p += len - sizeof(features);
+ 	}
+ 
+ 	mutex_lock(&mdsc->mutex);
+diff --git a/fs/ceph/quota.c b/fs/ceph/quota.c
+index de56dee60540..19507e2fdb57 100644
+--- a/fs/ceph/quota.c
++++ b/fs/ceph/quota.c
+@@ -159,8 +159,8 @@ static struct inode *lookup_quotarealm_inode(struct ceph_mds_client *mdsc,
+ 	}
+ 
+ 	if (IS_ERR(in)) {
+-		pr_warn("Can't lookup inode %llx (err: %ld)\n",
+-			realm->ino, PTR_ERR(in));
++		dout("Can't lookup inode %llx (err: %ld)\n",
++		     realm->ino, PTR_ERR(in));
+ 		qri->timeout = jiffies + msecs_to_jiffies(60 * 1000); /* XXX */
+ 	} else {
+ 		qri->timeout = 0;
+diff --git a/fs/coredump.c b/fs/coredump.c
+index 408418e6aa13..478a0d810136 100644
+--- a/fs/coredump.c
++++ b/fs/coredump.c
+@@ -788,6 +788,14 @@ void do_coredump(const kernel_siginfo_t *siginfo)
+ 	if (displaced)
+ 		put_files_struct(displaced);
+ 	if (!dump_interrupted()) {
++		/*
++		 * umh disabled with CONFIG_STATIC_USERMODEHELPER_PATH="" would
++		 * have this set to NULL.
++		 */
++		if (!cprm.file) {
++			pr_info("Core dump to |%s disabled\n", cn.corename);
++			goto close_fail;
++		}
+ 		file_start_write(cprm.file);
+ 		core_dumped = binfmt->core_dump(&cprm);
+ 		file_end_write(cprm.file);
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index eee3c92a9ebf..b0a097274cfe 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1149,6 +1149,10 @@ static inline bool chain_epi_lockless(struct epitem *epi)
+ {
+ 	struct eventpoll *ep = epi->ep;
+ 
++	/* Fast preliminary check */
++	if (epi->next != EP_UNACTIVE_PTR)
++		return false;
++
+ 	/* Check that the same epi has not been just chained from another CPU */
+ 	if (cmpxchg(&epi->next, EP_UNACTIVE_PTR, NULL) != EP_UNACTIVE_PTR)
+ 		return false;
+@@ -1215,16 +1219,12 @@ static int ep_poll_callback(wait_queue_entry_t *wait, unsigned mode, int sync, v
+ 	 * chained in ep->ovflist and requeued later on.
+ 	 */
+ 	if (READ_ONCE(ep->ovflist) != EP_UNACTIVE_PTR) {
+-		if (epi->next == EP_UNACTIVE_PTR &&
+-		    chain_epi_lockless(epi))
++		if (chain_epi_lockless(epi))
++			ep_pm_stay_awake_rcu(epi);
++	} else if (!ep_is_linked(epi)) {
++		/* In the usual case, add event to ready list. */
++		if (list_add_tail_lockless(&epi->rdllink, &ep->rdllist))
+ 			ep_pm_stay_awake_rcu(epi);
+-		goto out_unlock;
+-	}
+-
+-	/* If this file is already in the ready list we exit soon */
+-	if (!ep_is_linked(epi) &&
+-	    list_add_tail_lockless(&epi->rdllink, &ep->rdllist)) {
+-		ep_pm_stay_awake_rcu(epi);
+ 	}
+ 
+ 	/*
+@@ -1800,7 +1800,6 @@ static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
+ {
+ 	int res = 0, eavail, timed_out = 0;
+ 	u64 slack = 0;
+-	bool waiter = false;
+ 	wait_queue_entry_t wait;
+ 	ktime_t expires, *to = NULL;
+ 
+@@ -1845,21 +1844,23 @@ fetch_events:
+ 	 */
+ 	ep_reset_busy_poll_napi_id(ep);
+ 
+-	/*
+-	 * We don't have any available event to return to the caller.  We need
+-	 * to sleep here, and we will be woken by ep_poll_callback() when events
+-	 * become available.
+-	 */
+-	if (!waiter) {
+-		waiter = true;
+-		init_waitqueue_entry(&wait, current);
+-
++	do {
++		/*
++		 * Internally init_wait() uses autoremove_wake_function(),
++		 * thus wait entry is removed from the wait queue on each
++		 * wakeup. Why it is important? In case of several waiters
++		 * each new wakeup will hit the next waiter, giving it the
++		 * chance to harvest new event. Otherwise wakeup can be
++		 * lost. This is also good performance-wise, because on
++		 * normal wakeup path no need to call __remove_wait_queue()
++		 * explicitly, thus ep->lock is not taken, which halts the
++		 * event delivery.
++		 */
++		init_wait(&wait);
+ 		write_lock_irq(&ep->lock);
+ 		__add_wait_queue_exclusive(&ep->wq, &wait);
+ 		write_unlock_irq(&ep->lock);
+-	}
+ 
+-	for (;;) {
+ 		/*
+ 		 * We don't want to sleep if the ep_poll_callback() sends us
+ 		 * a wakeup in between. That's why we set the task state
+@@ -1889,10 +1890,20 @@ fetch_events:
+ 			timed_out = 1;
+ 			break;
+ 		}
+-	}
++
++		/* We were woken up, thus go and try to harvest some events */
++		eavail = 1;
++
++	} while (0);
+ 
+ 	__set_current_state(TASK_RUNNING);
+ 
++	if (!list_empty_careful(&wait.entry)) {
++		write_lock_irq(&ep->lock);
++		__remove_wait_queue(&ep->wq, &wait);
++		write_unlock_irq(&ep->lock);
++	}
++
+ send_events:
+ 	/*
+ 	 * Try to transfer events to user space. In case we get 0 events and
+@@ -1903,12 +1914,6 @@ send_events:
+ 	    !(res = ep_send_events(ep, events, maxevents)) && !timed_out)
+ 		goto fetch_events;
+ 
+-	if (waiter) {
+-		write_lock_irq(&ep->lock);
+-		__remove_wait_queue(&ep->wq, &wait);
+-		write_unlock_irq(&ep->lock);
+-	}
+-
+ 	return res;
+ }
+ 
+diff --git a/fs/ext4/ext4_jbd2.h b/fs/ext4/ext4_jbd2.h
+index 7ea4f6fa173b..4b9002f0e84c 100644
+--- a/fs/ext4/ext4_jbd2.h
++++ b/fs/ext4/ext4_jbd2.h
+@@ -512,6 +512,9 @@ static inline int ext4_should_dioread_nolock(struct inode *inode)
+ 		return 0;
+ 	if (ext4_should_journal_data(inode))
+ 		return 0;
++	/* temporary fix to prevent generic/422 test failures */
++	if (!test_opt(inode->i_sb, DELALLOC))
++		return 0;
+ 	return 1;
+ }
+ 
+diff --git a/fs/ext4/super.c b/fs/ext4/super.c
+index 446158ab507d..70796de7c468 100644
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2181,6 +2181,14 @@ static int parse_options(char *options, struct super_block *sb,
+ 		}
+ 	}
+ #endif
++	if (test_opt(sb, DIOREAD_NOLOCK)) {
++		int blocksize =
++			BLOCK_SIZE << le32_to_cpu(sbi->s_es->s_log_block_size);
++		if (blocksize < PAGE_SIZE)
++			ext4_msg(sb, KERN_WARNING, "Warning: mounting with an "
++				 "experimental mount option 'dioread_nolock' "
++				 "for blocksize < PAGE_SIZE");
++	}
+ 	return 1;
+ }
+ 
+@@ -3787,7 +3795,6 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 		set_opt(sb, NO_UID32);
+ 	/* xattr user namespace & acls are now defaulted on */
+ 	set_opt(sb, XATTR_USER);
+-	set_opt(sb, DIOREAD_NOLOCK);
+ #ifdef CONFIG_EXT4_FS_POSIX_ACL
+ 	set_opt(sb, POSIX_ACL);
+ #endif
+@@ -3837,6 +3844,10 @@ static int ext4_fill_super(struct super_block *sb, void *data, int silent)
+ 	sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
+ 
+ 	blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
++
++	if (blocksize == PAGE_SIZE)
++		set_opt(sb, DIOREAD_NOLOCK);
++
+ 	if (blocksize < EXT4_MIN_BLOCK_SIZE ||
+ 	    blocksize > EXT4_MAX_BLOCK_SIZE) {
+ 		ext4_msg(sb, KERN_ERR,
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 38b25f599896..9690c845a3e4 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -696,8 +696,6 @@ static const struct io_op_def io_op_defs[] = {
+ 		.needs_file		= 1,
+ 	},
+ 	[IORING_OP_OPENAT] = {
+-		.needs_file		= 1,
+-		.fd_non_neg		= 1,
+ 		.file_table		= 1,
+ 		.needs_fs		= 1,
+ 	},
+@@ -711,8 +709,6 @@ static const struct io_op_def io_op_defs[] = {
+ 	},
+ 	[IORING_OP_STATX] = {
+ 		.needs_mm		= 1,
+-		.needs_file		= 1,
+-		.fd_non_neg		= 1,
+ 		.needs_fs		= 1,
+ 		.file_table		= 1,
+ 	},
+@@ -743,8 +739,6 @@ static const struct io_op_def io_op_defs[] = {
+ 		.unbound_nonreg_file	= 1,
+ 	},
+ 	[IORING_OP_OPENAT2] = {
+-		.needs_file		= 1,
+-		.fd_non_neg		= 1,
+ 		.file_table		= 1,
+ 		.needs_fs		= 1,
+ 	},
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index 5778d1347b35..f5d30573f4a9 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -26,7 +26,7 @@ static bool should_merge(struct fsnotify_event *old_fsn,
+ 	old = FANOTIFY_E(old_fsn);
+ 	new = FANOTIFY_E(new_fsn);
+ 
+-	if (old_fsn->inode != new_fsn->inode || old->pid != new->pid ||
++	if (old_fsn->objectid != new_fsn->objectid || old->pid != new->pid ||
+ 	    old->fh_type != new->fh_type || old->fh_len != new->fh_len)
+ 		return false;
+ 
+@@ -314,7 +314,12 @@ struct fanotify_event *fanotify_alloc_event(struct fsnotify_group *group,
+ 	if (!event)
+ 		goto out;
+ init: __maybe_unused
+-	fsnotify_init_event(&event->fse, inode);
++	/*
++	 * Use the victim inode instead of the watching inode as the id for
++	 * event queue, so event reported on parent is merged with event
++	 * reported on child when both directory and child watches exist.
++	 */
++	fsnotify_init_event(&event->fse, (unsigned long)id);
+ 	event->mask = mask;
+ 	if (FAN_GROUP_FLAG(group, FAN_REPORT_TID))
+ 		event->pid = get_pid(task_pid(current));
+diff --git a/fs/notify/inotify/inotify_fsnotify.c b/fs/notify/inotify/inotify_fsnotify.c
+index d510223d302c..589dee962993 100644
+--- a/fs/notify/inotify/inotify_fsnotify.c
++++ b/fs/notify/inotify/inotify_fsnotify.c
+@@ -39,7 +39,7 @@ static bool event_compare(struct fsnotify_event *old_fsn,
+ 	if (old->mask & FS_IN_IGNORED)
+ 		return false;
+ 	if ((old->mask == new->mask) &&
+-	    (old_fsn->inode == new_fsn->inode) &&
++	    (old_fsn->objectid == new_fsn->objectid) &&
+ 	    (old->name_len == new->name_len) &&
+ 	    (!old->name_len || !strcmp(old->name, new->name)))
+ 		return true;
+@@ -118,7 +118,7 @@ int inotify_handle_event(struct fsnotify_group *group,
+ 		mask &= ~IN_ISDIR;
+ 
+ 	fsn_event = &event->fse;
+-	fsnotify_init_event(fsn_event, inode);
++	fsnotify_init_event(fsn_event, (unsigned long)inode);
+ 	event->mask = mask;
+ 	event->wd = i_mark->wd;
+ 	event->sync_cookie = cookie;
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 107537a543fd..81ffc8629fc4 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -635,7 +635,7 @@ static struct fsnotify_group *inotify_new_group(unsigned int max_events)
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 	group->overflow_event = &oevent->fse;
+-	fsnotify_init_event(group->overflow_event, NULL);
++	fsnotify_init_event(group->overflow_event, 0);
+ 	oevent->mask = FS_Q_OVERFLOW;
+ 	oevent->wd = -1;
+ 	oevent->sync_cookie = 0;
+diff --git a/include/linux/amba/bus.h b/include/linux/amba/bus.h
+index 26f0ecf401ea..0bbfd647f5c6 100644
+--- a/include/linux/amba/bus.h
++++ b/include/linux/amba/bus.h
+@@ -65,6 +65,7 @@ struct amba_device {
+ 	struct device		dev;
+ 	struct resource		res;
+ 	struct clk		*pclk;
++	struct device_dma_parameters dma_parms;
+ 	unsigned int		periphid;
+ 	unsigned int		cid;
+ 	struct amba_cs_uci_id	uci;
+diff --git a/include/linux/backing-dev-defs.h b/include/linux/backing-dev-defs.h
+index 4fc87dee005a..2849bdbb3acb 100644
+--- a/include/linux/backing-dev-defs.h
++++ b/include/linux/backing-dev-defs.h
+@@ -220,6 +220,7 @@ struct backing_dev_info {
+ 	wait_queue_head_t wb_waitq;
+ 
+ 	struct device *dev;
++	char dev_name[64];
+ 	struct device *owner;
+ 
+ 	struct timer_list laptop_mode_wb_timer;
+diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
+index f88197c1ffc2..c9ad5c3b7b4b 100644
+--- a/include/linux/backing-dev.h
++++ b/include/linux/backing-dev.h
+@@ -505,13 +505,6 @@ static inline int bdi_rw_congested(struct backing_dev_info *bdi)
+ 				  (1 << WB_async_congested));
+ }
+ 
+-extern const char *bdi_unknown_name;
+-
+-static inline const char *bdi_dev_name(struct backing_dev_info *bdi)
+-{
+-	if (!bdi || !bdi->dev)
+-		return bdi_unknown_name;
+-	return dev_name(bdi->dev);
+-}
++const char *bdi_dev_name(struct backing_dev_info *bdi);
+ 
+ #endif	/* _LINUX_BACKING_DEV_H */
+diff --git a/include/linux/fsnotify_backend.h b/include/linux/fsnotify_backend.h
+index 1915bdba2fad..64cfb5446f4d 100644
+--- a/include/linux/fsnotify_backend.h
++++ b/include/linux/fsnotify_backend.h
+@@ -133,8 +133,7 @@ struct fsnotify_ops {
+  */
+ struct fsnotify_event {
+ 	struct list_head list;
+-	/* inode may ONLY be dereferenced during handle_event(). */
+-	struct inode *inode;	/* either the inode the event happened to or its parent */
++	unsigned long objectid;	/* identifier for queue merges */
+ };
+ 
+ /*
+@@ -500,10 +499,10 @@ extern void fsnotify_finish_user_wait(struct fsnotify_iter_info *iter_info);
+ extern bool fsnotify_prepare_user_wait(struct fsnotify_iter_info *iter_info);
+ 
+ static inline void fsnotify_init_event(struct fsnotify_event *event,
+-				       struct inode *inode)
++				       unsigned long objectid)
+ {
+ 	INIT_LIST_HEAD(&event->list);
+-	event->inode = inode;
++	event->objectid = objectid;
+ }
+ 
+ #else
+diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
+index 041bfa412aa0..81900b3cbe37 100644
+--- a/include/linux/platform_device.h
++++ b/include/linux/platform_device.h
+@@ -25,6 +25,7 @@ struct platform_device {
+ 	bool		id_auto;
+ 	struct device	dev;
+ 	u64		platform_dma_mask;
++	struct device_dma_parameters dma_parms;
+ 	u32		num_resources;
+ 	struct resource	*resource;
+ 
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 0d1fe9297ac6..6f6ade63b04c 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -3,6 +3,8 @@
+ #define _LINUX_VIRTIO_NET_H
+ 
+ #include <linux/if_vlan.h>
++#include <uapi/linux/tcp.h>
++#include <uapi/linux/udp.h>
+ #include <uapi/linux/virtio_net.h>
+ 
+ static inline int virtio_net_hdr_set_proto(struct sk_buff *skb,
+@@ -28,17 +30,25 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 					bool little_endian)
+ {
+ 	unsigned int gso_type = 0;
++	unsigned int thlen = 0;
++	unsigned int ip_proto;
+ 
+ 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ 		switch (hdr->gso_type & ~VIRTIO_NET_HDR_GSO_ECN) {
+ 		case VIRTIO_NET_HDR_GSO_TCPV4:
+ 			gso_type = SKB_GSO_TCPV4;
++			ip_proto = IPPROTO_TCP;
++			thlen = sizeof(struct tcphdr);
+ 			break;
+ 		case VIRTIO_NET_HDR_GSO_TCPV6:
+ 			gso_type = SKB_GSO_TCPV6;
++			ip_proto = IPPROTO_TCP;
++			thlen = sizeof(struct tcphdr);
+ 			break;
+ 		case VIRTIO_NET_HDR_GSO_UDP:
+ 			gso_type = SKB_GSO_UDP;
++			ip_proto = IPPROTO_UDP;
++			thlen = sizeof(struct udphdr);
+ 			break;
+ 		default:
+ 			return -EINVAL;
+@@ -57,16 +67,22 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 
+ 		if (!skb_partial_csum_set(skb, start, off))
+ 			return -EINVAL;
++
++		if (skb_transport_offset(skb) + thlen > skb_headlen(skb))
++			return -EINVAL;
+ 	} else {
+ 		/* gso packets without NEEDS_CSUM do not set transport_offset.
+ 		 * probe and drop if does not match one of the above types.
+ 		 */
+ 		if (gso_type && skb->network_header) {
++			struct flow_keys_basic keys;
++
+ 			if (!skb->protocol)
+ 				virtio_net_hdr_set_proto(skb, hdr);
+ retry:
+-			skb_probe_transport_header(skb);
+-			if (!skb_transport_header_was_set(skb)) {
++			if (!skb_flow_dissect_flow_keys_basic(NULL, skb, &keys,
++							      NULL, 0, 0, 0,
++							      0)) {
+ 				/* UFO does not specify ipv4 or 6: try both */
+ 				if (gso_type & SKB_GSO_UDP &&
+ 				    skb->protocol == htons(ETH_P_IP)) {
+@@ -75,6 +91,12 @@ retry:
+ 				}
+ 				return -EINVAL;
+ 			}
++
++			if (keys.control.thoff + thlen > skb_headlen(skb) ||
++			    keys.basic.ip_proto != ip_proto)
++				return -EINVAL;
++
++			skb_set_transport_header(skb, keys.control.thoff);
+ 		}
+ 	}
+ 
+diff --git a/include/net/inet_ecn.h b/include/net/inet_ecn.h
+index c8e2bebd8d93..0f0d1efe06dd 100644
+--- a/include/net/inet_ecn.h
++++ b/include/net/inet_ecn.h
+@@ -99,6 +99,20 @@ static inline int IP_ECN_set_ce(struct iphdr *iph)
+ 	return 1;
+ }
+ 
++static inline int IP_ECN_set_ect1(struct iphdr *iph)
++{
++	u32 check = (__force u32)iph->check;
++
++	if ((iph->tos & INET_ECN_MASK) != INET_ECN_ECT_0)
++		return 0;
++
++	check += (__force u16)htons(0x100);
++
++	iph->check = (__force __sum16)(check + (check>=0xFFFF));
++	iph->tos ^= INET_ECN_MASK;
++	return 1;
++}
++
+ static inline void IP_ECN_clear(struct iphdr *iph)
+ {
+ 	iph->tos &= ~INET_ECN_MASK;
+@@ -134,6 +148,22 @@ static inline int IP6_ECN_set_ce(struct sk_buff *skb, struct ipv6hdr *iph)
+ 	return 1;
+ }
+ 
++static inline int IP6_ECN_set_ect1(struct sk_buff *skb, struct ipv6hdr *iph)
++{
++	__be32 from, to;
++
++	if ((ipv6_get_dsfield(iph) & INET_ECN_MASK) != INET_ECN_ECT_0)
++		return 0;
++
++	from = *(__be32 *)iph;
++	to = from ^ htonl(INET_ECN_MASK << 20);
++	*(__be32 *)iph = to;
++	if (skb->ip_summed == CHECKSUM_COMPLETE)
++		skb->csum = csum_add(csum_sub(skb->csum, (__force __wsum)from),
++				     (__force __wsum)to);
++	return 1;
++}
++
+ static inline void ipv6_copy_dscp(unsigned int dscp, struct ipv6hdr *inner)
+ {
+ 	dscp &= ~INET_ECN_MASK;
+@@ -159,6 +189,25 @@ static inline int INET_ECN_set_ce(struct sk_buff *skb)
+ 	return 0;
+ }
+ 
++static inline int INET_ECN_set_ect1(struct sk_buff *skb)
++{
++	switch (skb->protocol) {
++	case cpu_to_be16(ETH_P_IP):
++		if (skb_network_header(skb) + sizeof(struct iphdr) <=
++		    skb_tail_pointer(skb))
++			return IP_ECN_set_ect1(ip_hdr(skb));
++		break;
++
++	case cpu_to_be16(ETH_P_IPV6):
++		if (skb_network_header(skb) + sizeof(struct ipv6hdr) <=
++		    skb_tail_pointer(skb))
++			return IP6_ECN_set_ect1(skb, ipv6_hdr(skb));
++		break;
++	}
++
++	return 0;
++}
++
+ /*
+  * RFC 6040 4.2
+  *  To decapsulate the inner header at the tunnel egress, a compliant
+@@ -208,8 +257,12 @@ static inline int INET_ECN_decapsulate(struct sk_buff *skb,
+ 	int rc;
+ 
+ 	rc = __INET_ECN_decapsulate(outer, inner, &set_ce);
+-	if (!rc && set_ce)
+-		INET_ECN_set_ce(skb);
++	if (!rc) {
++		if (set_ce)
++			INET_ECN_set_ce(skb);
++		else if ((outer & INET_ECN_MASK) == INET_ECN_ECT_1)
++			INET_ECN_set_ect1(skb);
++	}
+ 
+ 	return rc;
+ }
+diff --git a/include/net/ip6_fib.h b/include/net/ip6_fib.h
+index fd60a8ac02ee..98ec56e2fae2 100644
+--- a/include/net/ip6_fib.h
++++ b/include/net/ip6_fib.h
+@@ -204,6 +204,7 @@ struct fib6_info {
+ struct rt6_info {
+ 	struct dst_entry		dst;
+ 	struct fib6_info __rcu		*from;
++	int				sernum;
+ 
+ 	struct rt6key			rt6i_dst;
+ 	struct rt6key			rt6i_src;
+@@ -292,6 +293,9 @@ static inline u32 rt6_get_cookie(const struct rt6_info *rt)
+ 	struct fib6_info *from;
+ 	u32 cookie = 0;
+ 
++	if (rt->sernum)
++		return rt->sernum;
++
+ 	rcu_read_lock();
+ 
+ 	from = rcu_dereference(rt->from);
+diff --git a/include/net/net_namespace.h b/include/net/net_namespace.h
+index 854d39ef1ca3..9cdb67e3a553 100644
+--- a/include/net/net_namespace.h
++++ b/include/net/net_namespace.h
+@@ -432,6 +432,13 @@ static inline int rt_genid_ipv4(const struct net *net)
+ 	return atomic_read(&net->ipv4.rt_genid);
+ }
+ 
++#if IS_ENABLED(CONFIG_IPV6)
++static inline int rt_genid_ipv6(const struct net *net)
++{
++	return atomic_read(&net->ipv6.fib6_sernum);
++}
++#endif
++
+ static inline void rt_genid_bump_ipv4(struct net *net)
+ {
+ 	atomic_inc(&net->ipv4.rt_genid);
+diff --git a/ipc/mqueue.c b/ipc/mqueue.c
+index 49a05ba3000d..3ba0ea3d5920 100644
+--- a/ipc/mqueue.c
++++ b/ipc/mqueue.c
+@@ -142,6 +142,7 @@ struct mqueue_inode_info {
+ 
+ 	struct sigevent notify;
+ 	struct pid *notify_owner;
++	u32 notify_self_exec_id;
+ 	struct user_namespace *notify_user_ns;
+ 	struct user_struct *user;	/* user who created, for accounting */
+ 	struct sock *notify_sock;
+@@ -774,28 +775,44 @@ static void __do_notify(struct mqueue_inode_info *info)
+ 	 * synchronously. */
+ 	if (info->notify_owner &&
+ 	    info->attr.mq_curmsgs == 1) {
+-		struct kernel_siginfo sig_i;
+ 		switch (info->notify.sigev_notify) {
+ 		case SIGEV_NONE:
+ 			break;
+-		case SIGEV_SIGNAL:
+-			/* sends signal */
++		case SIGEV_SIGNAL: {
++			struct kernel_siginfo sig_i;
++			struct task_struct *task;
++
++			/* do_mq_notify() accepts sigev_signo == 0, why?? */
++			if (!info->notify.sigev_signo)
++				break;
+ 
+ 			clear_siginfo(&sig_i);
+ 			sig_i.si_signo = info->notify.sigev_signo;
+ 			sig_i.si_errno = 0;
+ 			sig_i.si_code = SI_MESGQ;
+ 			sig_i.si_value = info->notify.sigev_value;
+-			/* map current pid/uid into info->owner's namespaces */
+ 			rcu_read_lock();
++			/* map current pid/uid into info->owner's namespaces */
+ 			sig_i.si_pid = task_tgid_nr_ns(current,
+ 						ns_of_pid(info->notify_owner));
+-			sig_i.si_uid = from_kuid_munged(info->notify_user_ns, current_uid());
++			sig_i.si_uid = from_kuid_munged(info->notify_user_ns,
++						current_uid());
++			/*
++			 * We can't use kill_pid_info(), this signal should
++			 * bypass check_kill_permission(). It is from kernel
++			 * but si_fromuser() can't know this.
++			 * We do check the self_exec_id, to avoid sending
++			 * signals to programs that don't expect them.
++			 */
++			task = pid_task(info->notify_owner, PIDTYPE_TGID);
++			if (task && task->self_exec_id ==
++						info->notify_self_exec_id) {
++				do_send_sig_info(info->notify.sigev_signo,
++						&sig_i, task, PIDTYPE_TGID);
++			}
+ 			rcu_read_unlock();
+-
+-			kill_pid_info(info->notify.sigev_signo,
+-				      &sig_i, info->notify_owner);
+ 			break;
++		}
+ 		case SIGEV_THREAD:
+ 			set_cookie(info->notify_cookie, NOTIFY_WOKENUP);
+ 			netlink_sendskb(info->notify_sock, info->notify_cookie);
+@@ -1384,6 +1401,7 @@ retry:
+ 			info->notify.sigev_signo = notification->sigev_signo;
+ 			info->notify.sigev_value = notification->sigev_value;
+ 			info->notify.sigev_notify = SIGEV_SIGNAL;
++			info->notify_self_exec_id = current->self_exec_id;
+ 			break;
+ 		}
+ 
+diff --git a/kernel/trace/preemptirq_delay_test.c b/kernel/trace/preemptirq_delay_test.c
+index 31c0fad4cb9e..c4c86de63cf9 100644
+--- a/kernel/trace/preemptirq_delay_test.c
++++ b/kernel/trace/preemptirq_delay_test.c
+@@ -113,22 +113,42 @@ static int preemptirq_delay_run(void *data)
+ 
+ 	for (i = 0; i < s; i++)
+ 		(testfuncs[i])(i);
++
++	set_current_state(TASK_INTERRUPTIBLE);
++	while (!kthread_should_stop()) {
++		schedule();
++		set_current_state(TASK_INTERRUPTIBLE);
++	}
++
++	__set_current_state(TASK_RUNNING);
++
+ 	return 0;
+ }
+ 
+-static struct task_struct *preemptirq_start_test(void)
++static int preemptirq_run_test(void)
+ {
++	struct task_struct *task;
++
+ 	char task_name[50];
+ 
+ 	snprintf(task_name, sizeof(task_name), "%s_test", test_mode);
+-	return kthread_run(preemptirq_delay_run, NULL, task_name);
++	task =  kthread_run(preemptirq_delay_run, NULL, task_name);
++	if (IS_ERR(task))
++		return PTR_ERR(task);
++	if (task)
++		kthread_stop(task);
++	return 0;
+ }
+ 
+ 
+ static ssize_t trigger_store(struct kobject *kobj, struct kobj_attribute *attr,
+ 			 const char *buf, size_t count)
+ {
+-	preemptirq_start_test();
++	ssize_t ret;
++
++	ret = preemptirq_run_test();
++	if (ret)
++		return ret;
+ 	return count;
+ }
+ 
+@@ -148,11 +168,9 @@ static struct kobject *preemptirq_delay_kobj;
+ 
+ static int __init preemptirq_delay_init(void)
+ {
+-	struct task_struct *test_task;
+ 	int retval;
+ 
+-	test_task = preemptirq_start_test();
+-	retval = PTR_ERR_OR_ZERO(test_task);
++	retval = preemptirq_run_test();
+ 	if (retval != 0)
+ 		return retval;
+ 
+diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
+index 6b11e4e2150c..5f0aa5d66e22 100644
+--- a/kernel/trace/trace.c
++++ b/kernel/trace/trace.c
+@@ -8452,6 +8452,19 @@ static int allocate_trace_buffers(struct trace_array *tr, int size)
+ 	 */
+ 	allocate_snapshot = false;
+ #endif
++
++	/*
++	 * Because of some magic with the way alloc_percpu() works on
++	 * x86_64, we need to synchronize the pgd of all the tables,
++	 * otherwise the trace events that happen in x86_64 page fault
++	 * handlers can't cope with accessing the chance that a
++	 * alloc_percpu()'d memory might be touched in the page fault trace
++	 * event. Oh, and we need to audit all other alloc_percpu() and vmalloc()
++	 * calls in tracing, because something might get triggered within a
++	 * page fault trace event!
++	 */
++	vmalloc_sync_mappings();
++
+ 	return 0;
+ }
+ 
+diff --git a/kernel/trace/trace_boot.c b/kernel/trace/trace_boot.c
+index 06d7feb5255f..9de29bb45a27 100644
+--- a/kernel/trace/trace_boot.c
++++ b/kernel/trace/trace_boot.c
+@@ -95,24 +95,20 @@ trace_boot_add_kprobe_event(struct xbc_node *node, const char *event)
+ 	struct xbc_node *anode;
+ 	char buf[MAX_BUF_LEN];
+ 	const char *val;
+-	int ret;
++	int ret = 0;
+ 
+-	kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN);
++	xbc_node_for_each_array_value(node, "probes", anode, val) {
++		kprobe_event_cmd_init(&cmd, buf, MAX_BUF_LEN);
+ 
+-	ret = kprobe_event_gen_cmd_start(&cmd, event, NULL);
+-	if (ret)
+-		return ret;
++		ret = kprobe_event_gen_cmd_start(&cmd, event, val);
++		if (ret)
++			break;
+ 
+-	xbc_node_for_each_array_value(node, "probes", anode, val) {
+-		ret = kprobe_event_add_field(&cmd, val);
++		ret = kprobe_event_gen_cmd_end(&cmd);
+ 		if (ret)
+-			return ret;
++			pr_err("Failed to add probe: %s\n", buf);
+ 	}
+ 
+-	ret = kprobe_event_gen_cmd_end(&cmd);
+-	if (ret)
+-		pr_err("Failed to add probe: %s\n", buf);
+-
+ 	return ret;
+ }
+ #else
+diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
+index d0568af4a0ef..35989383ae11 100644
+--- a/kernel/trace/trace_kprobe.c
++++ b/kernel/trace/trace_kprobe.c
+@@ -453,7 +453,7 @@ static bool __within_notrace_func(unsigned long addr)
+ 
+ static bool within_notrace_func(struct trace_kprobe *tk)
+ {
+-	unsigned long addr = addr = trace_kprobe_address(tk);
++	unsigned long addr = trace_kprobe_address(tk);
+ 	char symname[KSYM_NAME_LEN], *p;
+ 
+ 	if (!__within_notrace_func(addr))
+@@ -940,6 +940,9 @@ EXPORT_SYMBOL_GPL(kprobe_event_cmd_init);
+  * complete command or only the first part of it; in the latter case,
+  * kprobe_event_add_fields() can be used to add more fields following this.
+  *
++ * Unlikely the synth_event_gen_cmd_start(), @loc must be specified. This
++ * returns -EINVAL if @loc == NULL.
++ *
+  * Return: 0 if successful, error otherwise.
+  */
+ int __kprobe_event_gen_cmd_start(struct dynevent_cmd *cmd, bool kretprobe,
+@@ -953,6 +956,9 @@ int __kprobe_event_gen_cmd_start(struct dynevent_cmd *cmd, bool kretprobe,
+ 	if (cmd->type != DYNEVENT_TYPE_KPROBE)
+ 		return -EINVAL;
+ 
++	if (!loc)
++		return -EINVAL;
++
+ 	if (kretprobe)
+ 		snprintf(buf, MAX_EVENT_NAME_LEN, "r:kprobes/%s", name);
+ 	else
+diff --git a/kernel/umh.c b/kernel/umh.c
+index 7f255b5a8845..11bf5eea474c 100644
+--- a/kernel/umh.c
++++ b/kernel/umh.c
+@@ -544,6 +544,11 @@ EXPORT_SYMBOL_GPL(fork_usermode_blob);
+  * Runs a user-space application.  The application is started
+  * asynchronously if wait is not set, and runs as a child of system workqueues.
+  * (ie. it runs with full root capabilities and optimized affinity).
++ *
++ * Note: successful return value does not guarantee the helper was called at
++ * all. You can't rely on sub_info->{init,cleanup} being called even for
++ * UMH_WAIT_* wait modes as STATIC_USERMODEHELPER_PATH="" turns all helpers
++ * into a successful no-op.
+  */
+ int call_usermodehelper_exec(struct subprocess_info *sub_info, int wait)
+ {
+diff --git a/mm/backing-dev.c b/mm/backing-dev.c
+index 62f05f605fb5..3f2480e4c5af 100644
+--- a/mm/backing-dev.c
++++ b/mm/backing-dev.c
+@@ -21,7 +21,7 @@ struct backing_dev_info noop_backing_dev_info = {
+ EXPORT_SYMBOL_GPL(noop_backing_dev_info);
+ 
+ static struct class *bdi_class;
+-const char *bdi_unknown_name = "(unknown)";
++static const char *bdi_unknown_name = "(unknown)";
+ 
+ /*
+  * bdi_lock protects bdi_tree and updates to bdi_list. bdi_list has RCU
+@@ -938,7 +938,8 @@ int bdi_register_va(struct backing_dev_info *bdi, const char *fmt, va_list args)
+ 	if (bdi->dev)	/* The driver needs to use separate queues per device */
+ 		return 0;
+ 
+-	dev = device_create_vargs(bdi_class, NULL, MKDEV(0, 0), bdi, fmt, args);
++	vsnprintf(bdi->dev_name, sizeof(bdi->dev_name), fmt, args);
++	dev = device_create(bdi_class, NULL, MKDEV(0, 0), bdi, bdi->dev_name);
+ 	if (IS_ERR(dev))
+ 		return PTR_ERR(dev);
+ 
+@@ -1043,6 +1044,14 @@ void bdi_put(struct backing_dev_info *bdi)
+ }
+ EXPORT_SYMBOL(bdi_put);
+ 
++const char *bdi_dev_name(struct backing_dev_info *bdi)
++{
++	if (!bdi || !bdi->dev)
++		return bdi_unknown_name;
++	return bdi->dev_name;
++}
++EXPORT_SYMBOL_GPL(bdi_dev_name);
++
+ static wait_queue_head_t congestion_wqh[2] = {
+ 		__WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[0]),
+ 		__WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[1])
+diff --git a/mm/memcontrol.c b/mm/memcontrol.c
+index 615d73acd0da..537eae162ed3 100644
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -4977,19 +4977,22 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
+ 	unsigned int size;
+ 	int node;
+ 	int __maybe_unused i;
++	long error = -ENOMEM;
+ 
+ 	size = sizeof(struct mem_cgroup);
+ 	size += nr_node_ids * sizeof(struct mem_cgroup_per_node *);
+ 
+ 	memcg = kzalloc(size, GFP_KERNEL);
+ 	if (!memcg)
+-		return NULL;
++		return ERR_PTR(error);
+ 
+ 	memcg->id.id = idr_alloc(&mem_cgroup_idr, NULL,
+ 				 1, MEM_CGROUP_ID_MAX,
+ 				 GFP_KERNEL);
+-	if (memcg->id.id < 0)
++	if (memcg->id.id < 0) {
++		error = memcg->id.id;
+ 		goto fail;
++	}
+ 
+ 	memcg->vmstats_local = alloc_percpu(struct memcg_vmstats_percpu);
+ 	if (!memcg->vmstats_local)
+@@ -5033,7 +5036,7 @@ static struct mem_cgroup *mem_cgroup_alloc(void)
+ fail:
+ 	mem_cgroup_id_remove(memcg);
+ 	__mem_cgroup_free(memcg);
+-	return NULL;
++	return ERR_PTR(error);
+ }
+ 
+ static struct cgroup_subsys_state * __ref
+@@ -5044,8 +5047,8 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
+ 	long error = -ENOMEM;
+ 
+ 	memcg = mem_cgroup_alloc();
+-	if (!memcg)
+-		return ERR_PTR(error);
++	if (IS_ERR(memcg))
++		return ERR_CAST(memcg);
+ 
+ 	memcg->high = PAGE_COUNTER_MAX;
+ 	memcg->soft_limit = PAGE_COUNTER_MAX;
+@@ -5095,7 +5098,7 @@ mem_cgroup_css_alloc(struct cgroup_subsys_state *parent_css)
+ fail:
+ 	mem_cgroup_id_remove(memcg);
+ 	mem_cgroup_free(memcg);
+-	return ERR_PTR(-ENOMEM);
++	return ERR_PTR(error);
+ }
+ 
+ static int mem_cgroup_css_online(struct cgroup_subsys_state *css)
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 3c4eb750a199..a97de355a13c 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1555,6 +1555,7 @@ void set_zone_contiguous(struct zone *zone)
+ 		if (!__pageblock_pfn_to_page(block_start_pfn,
+ 					     block_end_pfn, zone))
+ 			return;
++		cond_resched();
+ 	}
+ 
+ 	/* We confirm that there is no hole */
+@@ -2350,6 +2351,14 @@ static inline void boost_watermark(struct zone *zone)
+ 
+ 	if (!watermark_boost_factor)
+ 		return;
++	/*
++	 * Don't bother in zones that are unlikely to produce results.
++	 * On small machines, including kdump capture kernels running
++	 * in a small area, boosting the watermark can cause an out of
++	 * memory situation immediately.
++	 */
++	if ((pageblock_nr_pages * 4) > zone_managed_pages(zone))
++		return;
+ 
+ 	max_boost = mult_frac(zone->_watermark[WMARK_HIGH],
+ 			watermark_boost_factor, 10000);
+diff --git a/net/batman-adv/bat_v_ogm.c b/net/batman-adv/bat_v_ogm.c
+index 969466218999..80b87b1f4e3a 100644
+--- a/net/batman-adv/bat_v_ogm.c
++++ b/net/batman-adv/bat_v_ogm.c
+@@ -893,7 +893,7 @@ static void batadv_v_ogm_process(const struct sk_buff *skb, int ogm_offset,
+ 
+ 	orig_node = batadv_v_ogm_orig_get(bat_priv, ogm_packet->orig);
+ 	if (!orig_node)
+-		return;
++		goto out;
+ 
+ 	neigh_node = batadv_neigh_node_get_or_create(orig_node, if_incoming,
+ 						     ethhdr->h_source);
+diff --git a/net/batman-adv/network-coding.c b/net/batman-adv/network-coding.c
+index 8f0717c3f7b5..b0469d15da0e 100644
+--- a/net/batman-adv/network-coding.c
++++ b/net/batman-adv/network-coding.c
+@@ -1009,15 +1009,8 @@ static struct batadv_nc_path *batadv_nc_get_path(struct batadv_priv *bat_priv,
+  */
+ static u8 batadv_nc_random_weight_tq(u8 tq)
+ {
+-	u8 rand_val, rand_tq;
+-
+-	get_random_bytes(&rand_val, sizeof(rand_val));
+-
+ 	/* randomize the estimated packet loss (max TQ - estimated TQ) */
+-	rand_tq = rand_val * (BATADV_TQ_MAX_VALUE - tq);
+-
+-	/* normalize the randomized packet loss */
+-	rand_tq /= BATADV_TQ_MAX_VALUE;
++	u8 rand_tq = prandom_u32_max(BATADV_TQ_MAX_VALUE + 1 - tq);
+ 
+ 	/* convert to (randomized) estimated tq again */
+ 	return BATADV_TQ_MAX_VALUE - rand_tq;
+diff --git a/net/batman-adv/sysfs.c b/net/batman-adv/sysfs.c
+index c45962d8527b..0f962dcd239e 100644
+--- a/net/batman-adv/sysfs.c
++++ b/net/batman-adv/sysfs.c
+@@ -1150,7 +1150,7 @@ static ssize_t batadv_store_throughput_override(struct kobject *kobj,
+ 	ret = batadv_parse_throughput(net_dev, buff, "throughput_override",
+ 				      &tp_override);
+ 	if (!ret)
+-		return count;
++		goto out;
+ 
+ 	old_tp_override = atomic_read(&hard_iface->bat_v.throughput_override);
+ 	if (old_tp_override == tp_override)
+@@ -1190,6 +1190,7 @@ static ssize_t batadv_show_throughput_override(struct kobject *kobj,
+ 
+ 	tp_override = atomic_read(&hard_iface->bat_v.throughput_override);
+ 
++	batadv_hardif_put(hard_iface);
+ 	return sprintf(buff, "%u.%u MBit\n", tp_override / 10,
+ 		       tp_override % 10);
+ }
+diff --git a/net/bridge/br_netlink.c b/net/bridge/br_netlink.c
+index 43dab4066f91..a0f5dbee8f9c 100644
+--- a/net/bridge/br_netlink.c
++++ b/net/bridge/br_netlink.c
+@@ -612,6 +612,7 @@ int br_process_vlan_info(struct net_bridge *br,
+ 					       v - 1, rtm_cmd);
+ 				v_change_start = 0;
+ 			}
++			cond_resched();
+ 		}
+ 		/* v_change_start is set only if the last/whole range changed */
+ 		if (v_change_start)
+diff --git a/net/core/devlink.c b/net/core/devlink.c
+index b831c5545d6a..b4e26b702352 100644
+--- a/net/core/devlink.c
++++ b/net/core/devlink.c
+@@ -4030,6 +4030,11 @@ static int devlink_nl_cmd_region_read_dumpit(struct sk_buff *skb,
+ 		end_offset = nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_ADDR]);
+ 		end_offset += nla_get_u64(attrs[DEVLINK_ATTR_REGION_CHUNK_LEN]);
+ 		dump = false;
++
++		if (start_offset == end_offset) {
++			err = 0;
++			goto nla_put_failure;
++		}
+ 	}
+ 
+ 	err = devlink_nl_region_read_snapshot_fill(skb, devlink,
+@@ -5029,6 +5034,7 @@ int devlink_health_report(struct devlink_health_reporter *reporter,
+ {
+ 	enum devlink_health_reporter_state prev_health_state;
+ 	struct devlink *devlink = reporter->devlink;
++	unsigned long recover_ts_threshold;
+ 
+ 	/* write a log message of the current error */
+ 	WARN_ON(!msg);
+@@ -5039,10 +5045,12 @@ int devlink_health_report(struct devlink_health_reporter *reporter,
+ 	devlink_recover_notify(reporter, DEVLINK_CMD_HEALTH_REPORTER_RECOVER);
+ 
+ 	/* abort if the previous error wasn't recovered */
++	recover_ts_threshold = reporter->last_recovery_ts +
++			       msecs_to_jiffies(reporter->graceful_period);
+ 	if (reporter->auto_recover &&
+ 	    (prev_health_state != DEVLINK_HEALTH_REPORTER_STATE_HEALTHY ||
+-	     jiffies - reporter->last_recovery_ts <
+-	     msecs_to_jiffies(reporter->graceful_period))) {
++	     (reporter->last_recovery_ts && reporter->recovery_count &&
++	      time_is_after_jiffies(recover_ts_threshold)))) {
+ 		trace_devlink_health_recover_aborted(devlink,
+ 						     reporter->ops->name,
+ 						     reporter->health_state,
+diff --git a/net/core/neighbour.c b/net/core/neighbour.c
+index 789a73aa7bd8..04953e5f2530 100644
+--- a/net/core/neighbour.c
++++ b/net/core/neighbour.c
+@@ -1954,6 +1954,9 @@ static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 				   NEIGH_UPDATE_F_OVERRIDE_ISROUTER);
+ 	}
+ 
++	if (protocol)
++		neigh->protocol = protocol;
++
+ 	if (ndm->ndm_flags & NTF_EXT_LEARNED)
+ 		flags |= NEIGH_UPDATE_F_EXT_LEARNED;
+ 
+@@ -1967,9 +1970,6 @@ static int neigh_add(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 		err = __neigh_update(neigh, lladdr, ndm->ndm_state, flags,
+ 				     NETLINK_CB(skb).portid, extack);
+ 
+-	if (protocol)
+-		neigh->protocol = protocol;
+-
+ 	neigh_release(neigh);
+ 
+ out:
+diff --git a/net/dsa/dsa2.c b/net/dsa/dsa2.c
+index e7c30b472034..154b639d27b8 100644
+--- a/net/dsa/dsa2.c
++++ b/net/dsa/dsa2.c
+@@ -459,7 +459,7 @@ static int dsa_tree_setup_switches(struct dsa_switch_tree *dst)
+ 	list_for_each_entry(dp, &dst->ports, list) {
+ 		err = dsa_port_setup(dp);
+ 		if (err)
+-			goto teardown;
++			continue;
+ 	}
+ 
+ 	return 0;
+diff --git a/net/dsa/master.c b/net/dsa/master.c
+index bd44bde272f4..4f5219e2e63c 100644
+--- a/net/dsa/master.c
++++ b/net/dsa/master.c
+@@ -289,7 +289,8 @@ static void dsa_master_ndo_teardown(struct net_device *dev)
+ {
+ 	struct dsa_port *cpu_dp = dev->dsa_ptr;
+ 
+-	dev->netdev_ops = cpu_dp->orig_ndo_ops;
++	if (cpu_dp->orig_ndo_ops)
++		dev->netdev_ops = cpu_dp->orig_ndo_ops;
+ 	cpu_dp->orig_ndo_ops = NULL;
+ }
+ 
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 2931224b674e..42d0596dd398 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -1388,9 +1388,18 @@ static struct rt6_info *ip6_rt_pcpu_alloc(const struct fib6_result *res)
+ 	}
+ 	ip6_rt_copy_init(pcpu_rt, res);
+ 	pcpu_rt->rt6i_flags |= RTF_PCPU;
++
++	if (f6i->nh)
++		pcpu_rt->sernum = rt_genid_ipv6(dev_net(dev));
++
+ 	return pcpu_rt;
+ }
+ 
++static bool rt6_is_valid(const struct rt6_info *rt6)
++{
++	return rt6->sernum == rt_genid_ipv6(dev_net(rt6->dst.dev));
++}
++
+ /* It should be called with rcu_read_lock() acquired */
+ static struct rt6_info *rt6_get_pcpu_route(const struct fib6_result *res)
+ {
+@@ -1398,6 +1407,19 @@ static struct rt6_info *rt6_get_pcpu_route(const struct fib6_result *res)
+ 
+ 	pcpu_rt = this_cpu_read(*res->nh->rt6i_pcpu);
+ 
++	if (pcpu_rt && pcpu_rt->sernum && !rt6_is_valid(pcpu_rt)) {
++		struct rt6_info *prev, **p;
++
++		p = this_cpu_ptr(res->nh->rt6i_pcpu);
++		prev = xchg(p, NULL);
++		if (prev) {
++			dst_dev_put(&prev->dst);
++			dst_release(&prev->dst);
++		}
++
++		pcpu_rt = NULL;
++	}
++
+ 	return pcpu_rt;
+ }
+ 
+@@ -2596,6 +2618,9 @@ static struct dst_entry *ip6_dst_check(struct dst_entry *dst, u32 cookie)
+ 
+ 	rt = container_of(dst, struct rt6_info, dst);
+ 
++	if (rt->sernum)
++		return rt6_is_valid(rt) ? dst : NULL;
++
+ 	rcu_read_lock();
+ 
+ 	/* All IPV6 dsts are created with ->obsolete set to the value
+diff --git a/net/netfilter/nf_nat_proto.c b/net/netfilter/nf_nat_proto.c
+index 3d816a1e5442..59151dc07fdc 100644
+--- a/net/netfilter/nf_nat_proto.c
++++ b/net/netfilter/nf_nat_proto.c
+@@ -68,15 +68,13 @@ static bool udp_manip_pkt(struct sk_buff *skb,
+ 			  enum nf_nat_manip_type maniptype)
+ {
+ 	struct udphdr *hdr;
+-	bool do_csum;
+ 
+ 	if (skb_ensure_writable(skb, hdroff + sizeof(*hdr)))
+ 		return false;
+ 
+ 	hdr = (struct udphdr *)(skb->data + hdroff);
+-	do_csum = hdr->check || skb->ip_summed == CHECKSUM_PARTIAL;
++	__udp_manip_pkt(skb, iphdroff, hdr, tuple, maniptype, !!hdr->check);
+ 
+-	__udp_manip_pkt(skb, iphdroff, hdr, tuple, maniptype, do_csum);
+ 	return true;
+ }
+ 
+diff --git a/net/netfilter/nfnetlink_osf.c b/net/netfilter/nfnetlink_osf.c
+index 9f5dea0064ea..916a3c7f9eaf 100644
+--- a/net/netfilter/nfnetlink_osf.c
++++ b/net/netfilter/nfnetlink_osf.c
+@@ -165,12 +165,12 @@ static bool nf_osf_match_one(const struct sk_buff *skb,
+ static const struct tcphdr *nf_osf_hdr_ctx_init(struct nf_osf_hdr_ctx *ctx,
+ 						const struct sk_buff *skb,
+ 						const struct iphdr *ip,
+-						unsigned char *opts)
++						unsigned char *opts,
++						struct tcphdr *_tcph)
+ {
+ 	const struct tcphdr *tcp;
+-	struct tcphdr _tcph;
+ 
+-	tcp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(struct tcphdr), &_tcph);
++	tcp = skb_header_pointer(skb, ip_hdrlen(skb), sizeof(struct tcphdr), _tcph);
+ 	if (!tcp)
+ 		return NULL;
+ 
+@@ -205,10 +205,11 @@ nf_osf_match(const struct sk_buff *skb, u_int8_t family,
+ 	int fmatch = FMATCH_WRONG;
+ 	struct nf_osf_hdr_ctx ctx;
+ 	const struct tcphdr *tcp;
++	struct tcphdr _tcph;
+ 
+ 	memset(&ctx, 0, sizeof(ctx));
+ 
+-	tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts);
++	tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts, &_tcph);
+ 	if (!tcp)
+ 		return false;
+ 
+@@ -265,10 +266,11 @@ bool nf_osf_find(const struct sk_buff *skb,
+ 	const struct nf_osf_finger *kf;
+ 	struct nf_osf_hdr_ctx ctx;
+ 	const struct tcphdr *tcp;
++	struct tcphdr _tcph;
+ 
+ 	memset(&ctx, 0, sizeof(ctx));
+ 
+-	tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts);
++	tcp = nf_osf_hdr_ctx_init(&ctx, skb, ip, opts, &_tcph);
+ 	if (!tcp)
+ 		return false;
+ 
+diff --git a/net/sched/sch_choke.c b/net/sched/sch_choke.c
+index a36974e9c601..1bcf8fbfd40e 100644
+--- a/net/sched/sch_choke.c
++++ b/net/sched/sch_choke.c
+@@ -323,7 +323,8 @@ static void choke_reset(struct Qdisc *sch)
+ 
+ 	sch->q.qlen = 0;
+ 	sch->qstats.backlog = 0;
+-	memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *));
++	if (q->tab)
++		memset(q->tab, 0, (q->tab_mask + 1) * sizeof(struct sk_buff *));
+ 	q->head = q->tail = 0;
+ 	red_restart(&q->vars);
+ }
+diff --git a/net/sched/sch_fq_codel.c b/net/sched/sch_fq_codel.c
+index 968519ff36e9..436160be9c18 100644
+--- a/net/sched/sch_fq_codel.c
++++ b/net/sched/sch_fq_codel.c
+@@ -416,7 +416,7 @@ static int fq_codel_change(struct Qdisc *sch, struct nlattr *opt,
+ 		q->quantum = max(256U, nla_get_u32(tb[TCA_FQ_CODEL_QUANTUM]));
+ 
+ 	if (tb[TCA_FQ_CODEL_DROP_BATCH_SIZE])
+-		q->drop_batch_size = min(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));
++		q->drop_batch_size = max(1U, nla_get_u32(tb[TCA_FQ_CODEL_DROP_BATCH_SIZE]));
+ 
+ 	if (tb[TCA_FQ_CODEL_MEMORY_LIMIT])
+ 		q->memory_limit = min(1U << 31, nla_get_u32(tb[TCA_FQ_CODEL_MEMORY_LIMIT]));
+diff --git a/net/sched/sch_sfq.c b/net/sched/sch_sfq.c
+index c787d4d46017..5a6def5e4e6d 100644
+--- a/net/sched/sch_sfq.c
++++ b/net/sched/sch_sfq.c
+@@ -637,6 +637,15 @@ static int sfq_change(struct Qdisc *sch, struct nlattr *opt)
+ 	if (ctl->divisor &&
+ 	    (!is_power_of_2(ctl->divisor) || ctl->divisor > 65536))
+ 		return -EINVAL;
++
++	/* slot->allot is a short, make sure quantum is not too big. */
++	if (ctl->quantum) {
++		unsigned int scaled = SFQ_ALLOT_SIZE(ctl->quantum);
++
++		if (scaled <= 0 || scaled > SHRT_MAX)
++			return -EINVAL;
++	}
++
+ 	if (ctl_v1 && !red_check_params(ctl_v1->qth_min, ctl_v1->qth_max,
+ 					ctl_v1->Wlog))
+ 		return -EINVAL;
+diff --git a/net/sched/sch_skbprio.c b/net/sched/sch_skbprio.c
+index 0fb10abf7579..7a5e4c454715 100644
+--- a/net/sched/sch_skbprio.c
++++ b/net/sched/sch_skbprio.c
+@@ -169,6 +169,9 @@ static int skbprio_change(struct Qdisc *sch, struct nlattr *opt,
+ {
+ 	struct tc_skbprio_qopt *ctl = nla_data(opt);
+ 
++	if (opt->nla_len != nla_attr_size(sizeof(*ctl)))
++		return -EINVAL;
++
+ 	sch->limit = ctl->limit;
+ 	return 0;
+ }
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 6a16af4b1ef6..26788f4a3b9e 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -1865,7 +1865,7 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
+ 		 */
+ 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
+ 		return sctp_sf_do_9_2_start_shutdown(net, ep, asoc,
+-						     SCTP_ST_CHUNK(0), NULL,
++						     SCTP_ST_CHUNK(0), repl,
+ 						     commands);
+ 	} else {
+ 		sctp_add_cmd_sf(commands, SCTP_CMD_NEW_STATE,
+@@ -5470,7 +5470,7 @@ enum sctp_disposition sctp_sf_do_9_2_start_shutdown(
+ 	 * in the Cumulative TSN Ack field the last sequential TSN it
+ 	 * has received from the peer.
+ 	 */
+-	reply = sctp_make_shutdown(asoc, NULL);
++	reply = sctp_make_shutdown(asoc, arg);
+ 	if (!reply)
+ 		goto nomem;
+ 
+@@ -6068,7 +6068,7 @@ enum sctp_disposition sctp_sf_autoclose_timer_expire(
+ 	disposition = SCTP_DISPOSITION_CONSUME;
+ 	if (sctp_outq_is_empty(&asoc->outqueue)) {
+ 		disposition = sctp_sf_do_9_2_start_shutdown(net, ep, asoc, type,
+-							    arg, commands);
++							    NULL, commands);
+ 	}
+ 
+ 	return disposition;
+diff --git a/net/tipc/topsrv.c b/net/tipc/topsrv.c
+index 3a12fc18239b..73dbed0c4b6b 100644
+--- a/net/tipc/topsrv.c
++++ b/net/tipc/topsrv.c
+@@ -402,10 +402,11 @@ static int tipc_conn_rcv_from_sock(struct tipc_conn *con)
+ 		read_lock_bh(&sk->sk_callback_lock);
+ 		ret = tipc_conn_rcv_sub(srv, con, &s);
+ 		read_unlock_bh(&sk->sk_callback_lock);
++		if (!ret)
++			return 0;
+ 	}
+-	if (ret < 0)
+-		tipc_conn_close(con);
+ 
++	tipc_conn_close(con);
+ 	return ret;
+ }
+ 
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index c98e602a1a2d..e23f94a5549b 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -800,6 +800,8 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 			*copied -= sk_msg_free(sk, msg);
+ 			tls_free_open_rec(sk);
+ 		}
++		if (psock)
++			sk_psock_put(sk, psock);
+ 		return err;
+ 	}
+ more_data:
+@@ -2081,8 +2083,9 @@ static void tls_data_ready(struct sock *sk)
+ 	strp_data_ready(&ctx->strp);
+ 
+ 	psock = sk_psock_get(sk);
+-	if (psock && !list_empty(&psock->ingress_msg)) {
+-		ctx->saved_data_ready(sk);
++	if (psock) {
++		if (!list_empty(&psock->ingress_msg))
++			ctx->saved_data_ready(sk);
+ 		sk_psock_put(sk, psock);
+ 	}
+ }
+diff --git a/scripts/decodecode b/scripts/decodecode
+index ba8b8d5834e6..fbdb325cdf4f 100755
+--- a/scripts/decodecode
++++ b/scripts/decodecode
+@@ -126,7 +126,7 @@ faultlinenum=$(( $(wc -l $T.oo  | cut -d" " -f1) - \
+ faultline=`cat $T.dis | head -1 | cut -d":" -f2-`
+ faultline=`echo "$faultline" | sed -e 's/\[/\\\[/g; s/\]/\\\]/g'`
+ 
+-cat $T.oo | sed -e "${faultlinenum}s/^\(.*:\)\(.*\)/\1\*\2\t\t<-- trapping instruction/"
++cat $T.oo | sed -e "${faultlinenum}s/^\([^:]*:\)\(.*\)/\1\*\2\t\t<-- trapping instruction/"
+ echo
+ cat $T.aa
+ cleanup
+diff --git a/tools/cgroup/iocost_monitor.py b/tools/cgroup/iocost_monitor.py
+index 7427a5ee761b..9d8e9613008a 100644
+--- a/tools/cgroup/iocost_monitor.py
++++ b/tools/cgroup/iocost_monitor.py
+@@ -159,7 +159,12 @@ class IocgStat:
+         else:
+             self.inflight_pct = 0
+ 
+-        self.debt_ms = iocg.abs_vdebt.counter.value_() / VTIME_PER_USEC / 1000
++        # vdebt used to be an atomic64_t and is now u64, support both
++        try:
++            self.debt_ms = iocg.abs_vdebt.counter.value_() / VTIME_PER_USEC / 1000
++        except:
++            self.debt_ms = iocg.abs_vdebt.value_() / VTIME_PER_USEC / 1000
++
+         self.use_delay = blkg.use_delay.counter.value_()
+         self.delay_ms = blkg.delay_nsec.counter.value_() / 1_000_000
+ 
+diff --git a/tools/objtool/check.c b/tools/objtool/check.c
+index 95c485d3d4d8..f9ffb548b4fa 100644
+--- a/tools/objtool/check.c
++++ b/tools/objtool/check.c
+@@ -1403,7 +1403,7 @@ static int update_insn_state_regs(struct instruction *insn, struct insn_state *s
+ 	struct cfi_reg *cfa = &state->cfa;
+ 	struct stack_op *op = &insn->stack_op;
+ 
+-	if (cfa->base != CFI_SP)
++	if (cfa->base != CFI_SP && cfa->base != CFI_SP_INDIRECT)
+ 		return 0;
+ 
+ 	/* push */
+diff --git a/tools/testing/selftests/net/tcp_mmap.c b/tools/testing/selftests/net/tcp_mmap.c
+index 35505b31e5cc..4555f88252ba 100644
+--- a/tools/testing/selftests/net/tcp_mmap.c
++++ b/tools/testing/selftests/net/tcp_mmap.c
+@@ -165,9 +165,10 @@ void *child_thread(void *arg)
+ 			socklen_t zc_len = sizeof(zc);
+ 			int res;
+ 
++			memset(&zc, 0, sizeof(zc));
+ 			zc.address = (__u64)((unsigned long)addr);
+ 			zc.length = chunk_size;
+-			zc.recv_skip_hint = 0;
++
+ 			res = getsockopt(fd, IPPROTO_TCP, TCP_ZEROCOPY_RECEIVE,
+ 					 &zc, &zc_len);
+ 			if (res == -1)
+@@ -281,12 +282,14 @@ static void setup_sockaddr(int domain, const char *str_addr,
+ static void do_accept(int fdlisten)
+ {
+ 	pthread_attr_t attr;
++	int rcvlowat;
+ 
+ 	pthread_attr_init(&attr);
+ 	pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
+ 
++	rcvlowat = chunk_size;
+ 	if (setsockopt(fdlisten, SOL_SOCKET, SO_RCVLOWAT,
+-		       &chunk_size, sizeof(chunk_size)) == -1) {
++		       &rcvlowat, sizeof(rcvlowat)) == -1) {
+ 		perror("setsockopt SO_RCVLOWAT");
+ 	}
+ 
+diff --git a/tools/testing/selftests/wireguard/netns.sh b/tools/testing/selftests/wireguard/netns.sh
+index 936e1ca9410e..17a1f53ceba0 100755
+--- a/tools/testing/selftests/wireguard/netns.sh
++++ b/tools/testing/selftests/wireguard/netns.sh
+@@ -48,8 +48,11 @@ cleanup() {
+ 	exec 2>/dev/null
+ 	printf "$orig_message_cost" > /proc/sys/net/core/message_cost
+ 	ip0 link del dev wg0
++	ip0 link del dev wg1
+ 	ip1 link del dev wg0
++	ip1 link del dev wg1
+ 	ip2 link del dev wg0
++	ip2 link del dev wg1
+ 	local to_kill="$(ip netns pids $netns0) $(ip netns pids $netns1) $(ip netns pids $netns2)"
+ 	[[ -n $to_kill ]] && kill $to_kill
+ 	pp ip netns del $netns1
+@@ -77,18 +80,20 @@ ip0 link set wg0 netns $netns2
+ key1="$(pp wg genkey)"
+ key2="$(pp wg genkey)"
+ key3="$(pp wg genkey)"
++key4="$(pp wg genkey)"
+ pub1="$(pp wg pubkey <<<"$key1")"
+ pub2="$(pp wg pubkey <<<"$key2")"
+ pub3="$(pp wg pubkey <<<"$key3")"
++pub4="$(pp wg pubkey <<<"$key4")"
+ psk="$(pp wg genpsk)"
+ [[ -n $key1 && -n $key2 && -n $psk ]]
+ 
+ configure_peers() {
+ 	ip1 addr add 192.168.241.1/24 dev wg0
+-	ip1 addr add fd00::1/24 dev wg0
++	ip1 addr add fd00::1/112 dev wg0
+ 
+ 	ip2 addr add 192.168.241.2/24 dev wg0
+-	ip2 addr add fd00::2/24 dev wg0
++	ip2 addr add fd00::2/112 dev wg0
+ 
+ 	n1 wg set wg0 \
+ 		private-key <(echo "$key1") \
+@@ -230,9 +235,38 @@ n1 ping -W 1 -c 1 192.168.241.2
+ n1 wg set wg0 private-key <(echo "$key3")
+ n2 wg set wg0 peer "$pub3" preshared-key <(echo "$psk") allowed-ips 192.168.241.1/32 peer "$pub1" remove
+ n1 ping -W 1 -c 1 192.168.241.2
++n2 wg set wg0 peer "$pub3" remove
++
++# Test that we can route wg through wg
++ip1 addr flush dev wg0
++ip2 addr flush dev wg0
++ip1 addr add fd00::5:1/112 dev wg0
++ip2 addr add fd00::5:2/112 dev wg0
++n1 wg set wg0 private-key <(echo "$key1") peer "$pub2" preshared-key <(echo "$psk") allowed-ips fd00::5:2/128 endpoint 127.0.0.1:2
++n2 wg set wg0 private-key <(echo "$key2") listen-port 2 peer "$pub1" preshared-key <(echo "$psk") allowed-ips fd00::5:1/128 endpoint 127.212.121.99:9998
++ip1 link add wg1 type wireguard
++ip2 link add wg1 type wireguard
++ip1 addr add 192.168.241.1/24 dev wg1
++ip1 addr add fd00::1/112 dev wg1
++ip2 addr add 192.168.241.2/24 dev wg1
++ip2 addr add fd00::2/112 dev wg1
++ip1 link set mtu 1340 up dev wg1
++ip2 link set mtu 1340 up dev wg1
++n1 wg set wg1 listen-port 5 private-key <(echo "$key3") peer "$pub4" allowed-ips 192.168.241.2/32,fd00::2/128 endpoint [fd00::5:2]:5
++n2 wg set wg1 listen-port 5 private-key <(echo "$key4") peer "$pub3" allowed-ips 192.168.241.1/32,fd00::1/128 endpoint [fd00::5:1]:5
++tests
++# Try to set up a routing loop between the two namespaces
++ip1 link set netns $netns0 dev wg1
++ip0 addr add 192.168.241.1/24 dev wg1
++ip0 link set up dev wg1
++n0 ping -W 1 -c 1 192.168.241.2
++n1 wg set wg0 peer "$pub2" endpoint 192.168.241.2:7
++ip2 link del wg0
++ip2 link del wg1
++! n0 ping -W 1 -c 10 -f 192.168.241.2 || false # Should not crash kernel
+ 
++ip0 link del wg1
+ ip1 link del wg0
+-ip2 link del wg0
+ 
+ # Test using NAT. We now change the topology to this:
+ # ┌────────────────────────────────────────┐    ┌────────────────────────────────────────────────┐     ┌────────────────────────────────────────┐
+@@ -282,6 +316,20 @@ pp sleep 3
+ n2 ping -W 1 -c 1 192.168.241.1
+ n1 wg set wg0 peer "$pub2" persistent-keepalive 0
+ 
++# Test that onion routing works, even when it loops
++n1 wg set wg0 peer "$pub3" allowed-ips 192.168.242.2/32 endpoint 192.168.241.2:5
++ip1 addr add 192.168.242.1/24 dev wg0
++ip2 link add wg1 type wireguard
++ip2 addr add 192.168.242.2/24 dev wg1
++n2 wg set wg1 private-key <(echo "$key3") listen-port 5 peer "$pub1" allowed-ips 192.168.242.1/32
++ip2 link set wg1 up
++n1 ping -W 1 -c 1 192.168.242.2
++ip2 link del wg1
++n1 wg set wg0 peer "$pub3" endpoint 192.168.242.2:5
++! n1 ping -W 1 -c 1 192.168.242.2 || false # Should not crash kernel
++n1 wg set wg0 peer "$pub3" remove
++ip1 addr del 192.168.242.1/24 dev wg0
++
+ # Do a wg-quick(8)-style policy routing for the default route, making sure vethc has a v6 address to tease out bugs.
+ ip1 -6 addr add fc00::9/96 dev vethc
+ ip1 -6 route add default via fc00::1
+diff --git a/virt/kvm/arm/hyp/aarch32.c b/virt/kvm/arm/hyp/aarch32.c
+index d31f267961e7..25c0e47d57cb 100644
+--- a/virt/kvm/arm/hyp/aarch32.c
++++ b/virt/kvm/arm/hyp/aarch32.c
+@@ -125,12 +125,16 @@ static void __hyp_text kvm_adjust_itstate(struct kvm_vcpu *vcpu)
+  */
+ void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
+ {
++	u32 pc = *vcpu_pc(vcpu);
+ 	bool is_thumb;
+ 
+ 	is_thumb = !!(*vcpu_cpsr(vcpu) & PSR_AA32_T_BIT);
+ 	if (is_thumb && !is_wide_instr)
+-		*vcpu_pc(vcpu) += 2;
++		pc += 2;
+ 	else
+-		*vcpu_pc(vcpu) += 4;
++		pc += 4;
++
++	*vcpu_pc(vcpu) = pc;
++
+ 	kvm_adjust_itstate(vcpu);
+ }
+diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
+index 97fb2a40e6ba..e7abd05ea896 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio.c
++++ b/virt/kvm/arm/vgic/vgic-mmio.c
+@@ -368,7 +368,7 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+ static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
+ {
+ 	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+-	    intid > VGIC_NR_PRIVATE_IRQS)
++	    intid >= VGIC_NR_PRIVATE_IRQS)
+ 		kvm_arm_halt_guest(vcpu->kvm);
+ }
+ 
+@@ -376,7 +376,7 @@ static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
+ static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
+ {
+ 	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+-	    intid > VGIC_NR_PRIVATE_IRQS)
++	    intid >= VGIC_NR_PRIVATE_IRQS)
+ 		kvm_arm_resume_guest(vcpu->kvm);
+ }
+ 
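The two vgic hunks above fix an off-by-one: private interrupts (SGIs and PPIs) occupy IDs 0–31, so the first shared interrupt ID equals VGIC_NR_PRIVATE_IRQS (32) itself and must also take the halt/resume path. A small sketch of the corrected boundary test (the constant follows the GIC architecture; the helper name is illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VGIC_NR_PRIVATE_IRQS 32	/* SGIs 0-15 + PPIs 16-31 */

/* After the fix, intid 32 (the first SPI) is correctly treated as shared,
 * so touching its active state halts all VCPUs. */
static bool needs_global_halt(uint32_t intid)
{
	return intid >= VGIC_NR_PRIVATE_IRQS;
}
```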



* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-20 11:35 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-20 11:35 UTC (permalink / raw
  To: gentoo-commits

commit:     d63cc5104b3bda1cd127301cdf13bc6d8d0c7d9e
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 20 11:35:24 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 20 11:35:24 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=d63cc510

Linux patch 5.6.14 and removal of redundant patch

Removed redundant patch: x86: Fix early boot crash on gcc-10

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                                |   12 +-
 1013_linux-5.6.14.patch                    | 7349 ++++++++++++++++++++++++++++
 1700_x86-gcc-10-early-boot-crash-fix.patch |  131 -
 3 files changed, 7357 insertions(+), 135 deletions(-)

diff --git a/0000_README b/0000_README
index 6a6ec25..3a37e9d 100644
--- a/0000_README
+++ b/0000_README
@@ -95,6 +95,10 @@ Patch:  1012_linux-5.6.13.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.13
 
+Patch:  1013_linux-5.6.14.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.14
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.
@@ -103,10 +111,6 @@ Patch:  1510_fs-enable-link-security-restrictions-by-default.patch
 From:   http://sources.debian.net/src/linux/3.16.7-ckt4-3/debian/patches/debian/fs-enable-link-security-restrictions-by-default.patch/
 Desc:   Enable link security restrictions by default.
 
-Patch:  1700_x86-gcc-10-early-boot-crash-fix.patch
-From:   https://https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/patch/?id=f670269a42bfdd2c83a1118cc3d1b475547eac22
-Desc:   x86: Fix early boot crash on gcc-10, 
-
 Patch:  2000_BT-Check-key-sizes-only-if-Secure-Simple-Pairing-enabled.patch
 From:   https://lore.kernel.org/linux-bluetooth/20190522070540.48895-1-marcel@holtmann.org/raw
 Desc:   Bluetooth: Check key sizes only when Secure Simple Pairing is enabled. See bug #686758

diff --git a/1013_linux-5.6.14.patch b/1013_linux-5.6.14.patch
new file mode 100644
index 0000000..da7d2c2
--- /dev/null
+++ b/1013_linux-5.6.14.patch
@@ -0,0 +1,7349 @@
+diff --git a/Documentation/core-api/printk-formats.rst b/Documentation/core-api/printk-formats.rst
+index 8ebe46b1af39..5dfcc4592b23 100644
+--- a/Documentation/core-api/printk-formats.rst
++++ b/Documentation/core-api/printk-formats.rst
+@@ -112,6 +112,20 @@ used when printing stack backtraces. The specifier takes into
+ consideration the effect of compiler optimisations which may occur
+ when tail-calls are used and marked with the noreturn GCC attribute.
+ 
++Probed Pointers from BPF / tracing
++----------------------------------
++
++::
++
++	%pks	kernel string
++	%pus	user string
++
++The ``k`` and ``u`` specifiers are used for printing prior probed memory from
++either kernel memory (k) or user memory (u). The subsequent ``s`` specifier
++results in printing a string. For direct use in regular vsnprintf() the (k)
++and (u) annotation is ignored, however, when used out of BPF's bpf_trace_printk(),
++for example, it reads the memory it is pointing to without faulting.
++
+ Kernel Pointers
+ ---------------
+ 
+diff --git a/Documentation/devicetree/bindings/dma/fsl-edma.txt b/Documentation/devicetree/bindings/dma/fsl-edma.txt
+index e77b08ebcd06..ee1754739b4b 100644
+--- a/Documentation/devicetree/bindings/dma/fsl-edma.txt
++++ b/Documentation/devicetree/bindings/dma/fsl-edma.txt
+@@ -10,7 +10,8 @@ Required properties:
+ - compatible :
+ 	- "fsl,vf610-edma" for eDMA used similar to that on Vybrid vf610 SoC
+ 	- "fsl,imx7ulp-edma" for eDMA2 used similar to that on i.mx7ulp
+-	- "fsl,fsl,ls1028a-edma" for eDMA used similar to that on Vybrid vf610 SoC
++	- "fsl,ls1028a-edma" followed by "fsl,vf610-edma" for eDMA used on the
++	  LS1028A SoC.
+ - reg : Specifies base physical address(s) and size of the eDMA registers.
+ 	The 1st region is eDMA control register's address and size.
+ 	The 2nd and the 3rd regions are programmable channel multiplexing
+diff --git a/Makefile b/Makefile
+index d252219666fd..713f93cceffe 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 13
++SUBLEVEL = 14
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+@@ -708,12 +708,9 @@ else ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
+ KBUILD_CFLAGS += -Os
+ endif
+ 
+-ifdef CONFIG_CC_DISABLE_WARN_MAYBE_UNINITIALIZED
+-KBUILD_CFLAGS   += -Wno-maybe-uninitialized
+-endif
+-
+ # Tell gcc to never replace conditional load with a non-conditional one
+ KBUILD_CFLAGS	+= $(call cc-option,--param=allow-store-data-races=0)
++KBUILD_CFLAGS	+= $(call cc-option,-fno-allow-store-data-races)
+ 
+ include scripts/Makefile.kcov
+ include scripts/Makefile.gcc-plugins
+@@ -861,6 +858,17 @@ KBUILD_CFLAGS += -Wno-pointer-sign
+ # disable stringop warnings in gcc 8+
+ KBUILD_CFLAGS += $(call cc-disable-warning, stringop-truncation)
+ 
++# We'll want to enable this eventually, but it's not going away for 5.7 at least
++KBUILD_CFLAGS += $(call cc-disable-warning, zero-length-bounds)
++KBUILD_CFLAGS += $(call cc-disable-warning, array-bounds)
++KBUILD_CFLAGS += $(call cc-disable-warning, stringop-overflow)
++
++# Another good warning that we'll want to enable eventually
++KBUILD_CFLAGS += $(call cc-disable-warning, restrict)
++
++# Enabled with W=2, disabled by default as noisy
++KBUILD_CFLAGS += $(call cc-disable-warning, maybe-uninitialized)
++
+ # disable invalid "can't wrap" optimizations for signed / pointers
+ KBUILD_CFLAGS	+= $(call cc-option,-fno-strict-overflow)
+ 
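The Makefile hunks above lean on Kbuild's `cc-option` / `cc-disable-warning` helpers, which probe whether the compiler accepts a flag before adding it. A rough shell equivalent of that probe (simplified: the real helpers also cache results and, for warnings, test the positive `-W` form because GCC silently accepts unknown `-Wno-*` flags):

```shell
CC=${CC:-cc}

# Echo the flag if $CC accepts it when compiling an empty translation unit,
# otherwise echo nothing -- the same idea as Kbuild's $(call cc-option,...).
cc_option() {
    if $CC -Werror "$1" -S -x c /dev/null -o /dev/null 2>/dev/null; then
        printf '%s\n' "$1"
    fi
}
```

Usage would look like `KBUILD_CFLAGS="$KBUILD_CFLAGS $(cc_option -fno-allow-store-data-races)"`, so compilers that lack the flag simply get nothing appended.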
+diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
+index 5f5ee16f07a3..a341511f014c 100644
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -172,6 +172,7 @@
+ 			#address-cells = <1>;
+ 			ranges = <0x51000000 0x51000000 0x3000
+ 				  0x0	     0x20000000 0x10000000>;
++			dma-ranges;
+ 			/**
+ 			 * To enable PCI endpoint mode, disable the pcie1_rc
+ 			 * node and enable pcie1_ep mode.
+@@ -185,7 +186,6 @@
+ 				device_type = "pci";
+ 				ranges = <0x81000000 0 0          0x03000 0 0x00010000
+ 					  0x82000000 0 0x20013000 0x13000 0 0xffed000>;
+-				dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>;
+ 				bus-range = <0x00 0xff>;
+ 				#interrupt-cells = <1>;
+ 				num-lanes = <1>;
+@@ -230,6 +230,7 @@
+ 			#address-cells = <1>;
+ 			ranges = <0x51800000 0x51800000 0x3000
+ 				  0x0	     0x30000000 0x10000000>;
++			dma-ranges;
+ 			status = "disabled";
+ 			pcie2_rc: pcie@51800000 {
+ 				reg = <0x51800000 0x2000>, <0x51802000 0x14c>, <0x1000 0x2000>;
+@@ -240,7 +241,6 @@
+ 				device_type = "pci";
+ 				ranges = <0x81000000 0 0          0x03000 0 0x00010000
+ 					  0x82000000 0 0x30013000 0x13000 0 0xffed000>;
+-				dma-ranges = <0x02000000 0x0 0x00000000 0x00000000 0x1 0x00000000>;
+ 				bus-range = <0x00 0xff>;
+ 				#interrupt-cells = <1>;
+ 				num-lanes = <1>;
+diff --git a/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts b/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
+index 0cd75dadf292..188639738dc3 100644
+--- a/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
++++ b/arch/arm/boot/dts/imx27-phytec-phycard-s-rdk.dts
+@@ -75,8 +75,8 @@
+ 	imx27-phycard-s-rdk {
+ 		pinctrl_i2c1: i2c1grp {
+ 			fsl,pins = <
+-				MX27_PAD_I2C2_SDA__I2C2_SDA 0x0
+-				MX27_PAD_I2C2_SCL__I2C2_SCL 0x0
++				MX27_PAD_I2C_DATA__I2C_DATA 0x0
++				MX27_PAD_I2C_CLK__I2C_CLK 0x0
+ 			>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/imx6dl-yapp4-ursa.dts b/arch/arm/boot/dts/imx6dl-yapp4-ursa.dts
+index 0d594e4bd559..a1173bf5bff5 100644
+--- a/arch/arm/boot/dts/imx6dl-yapp4-ursa.dts
++++ b/arch/arm/boot/dts/imx6dl-yapp4-ursa.dts
+@@ -38,7 +38,7 @@
+ };
+ 
+ &switch_ports {
+-	/delete-node/ port@2;
++	/delete-node/ port@3;
+ };
+ 
+ &touchscreen {
+diff --git a/arch/arm/boot/dts/r8a73a4.dtsi b/arch/arm/boot/dts/r8a73a4.dtsi
+index a5cd31229fbd..a3ba722a9d7f 100644
+--- a/arch/arm/boot/dts/r8a73a4.dtsi
++++ b/arch/arm/boot/dts/r8a73a4.dtsi
+@@ -131,7 +131,14 @@
+ 	cmt1: timer@e6130000 {
+ 		compatible = "renesas,r8a73a4-cmt1", "renesas,rcar-gen2-cmt1";
+ 		reg = <0 0xe6130000 0 0x1004>;
+-		interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
++		interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>,
++			     <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
+ 		clocks = <&mstp3_clks R8A73A4_CLK_CMT1>;
+ 		clock-names = "fck";
+ 		power-domains = <&pd_c5>;
+diff --git a/arch/arm/boot/dts/r8a7740.dtsi b/arch/arm/boot/dts/r8a7740.dtsi
+index ebc1ff64f530..90feb2cf9960 100644
+--- a/arch/arm/boot/dts/r8a7740.dtsi
++++ b/arch/arm/boot/dts/r8a7740.dtsi
+@@ -479,7 +479,7 @@
+ 		cpg_clocks: cpg_clocks@e6150000 {
+ 			compatible = "renesas,r8a7740-cpg-clocks";
+ 			reg = <0xe6150000 0x10000>;
+-			clocks = <&extal1_clk>, <&extalr_clk>;
++			clocks = <&extal1_clk>, <&extal2_clk>, <&extalr_clk>;
+ 			#clock-cells = <1>;
+ 			clock-output-names = "system", "pllc0", "pllc1",
+ 					     "pllc2", "r",
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+index abe04f4ad7d8..eeaa95baaa10 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12-common.dtsi
+@@ -2204,7 +2204,7 @@
+ 				reg = <0x0 0xff400000 0x0 0x40000>;
+ 				interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clkc CLKID_USB1_DDR_BRIDGE>;
+-				clock-names = "ddr";
++				clock-names = "otg";
+ 				phys = <&usb2_phy1>;
+ 				phy-names = "usb2-phy";
+ 				dr_mode = "peripheral";
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi b/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
+index 554863429aa6..e2094575f528 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-khadas-vim3.dtsi
+@@ -152,6 +152,10 @@
+ 	clock-latency = <50000>;
+ };
+ 
++&frddr_a {
++	status = "okay";
++};
++
+ &frddr_b {
+ 	status = "okay";
+ };
+diff --git a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
+index ccd0bced01e8..2e66d6418a59 100644
+--- a/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
++++ b/arch/arm64/boot/dts/amlogic/meson-g12b-ugoos-am6.dts
+@@ -545,7 +545,7 @@
+ &usb {
+ 	status = "okay";
+ 	dr_mode = "host";
+-	vbus-regulator = <&usb_pwr_en>;
++	vbus-supply = <&usb_pwr_en>;
+ };
+ 
+ &usb2_phy0 {
+diff --git a/arch/arm64/boot/dts/freescale/imx8mn.dtsi b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+index a44b5438e842..882e913436ca 100644
+--- a/arch/arm64/boot/dts/freescale/imx8mn.dtsi
++++ b/arch/arm64/boot/dts/freescale/imx8mn.dtsi
+@@ -661,7 +661,7 @@
+ 				reg = <0x30bd0000 0x10000>;
+ 				interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+ 				clocks = <&clk IMX8MN_CLK_SDMA1_ROOT>,
+-					 <&clk IMX8MN_CLK_SDMA1_ROOT>;
++					 <&clk IMX8MN_CLK_AHB>;
+ 				clock-names = "ipg", "ahb";
+ 				#dma-cells = <3>;
+ 				fsl,sdma-ram-script-name = "imx/sdma/sdma-imx7d.bin";
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+index fff6115f2670..a85b85d85a5f 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+@@ -658,8 +658,8 @@
+ 	s11 {
+ 		qcom,saw-leader;
+ 		regulator-always-on;
+-		regulator-min-microvolt = <1230000>;
+-		regulator-max-microvolt = <1230000>;
++		regulator-min-microvolt = <980000>;
++		regulator-max-microvolt = <980000>;
+ 	};
+ };
+ 
+diff --git a/arch/arm64/boot/dts/renesas/r8a77980.dtsi b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+index b340fb469999..1692bc95129e 100644
+--- a/arch/arm64/boot/dts/renesas/r8a77980.dtsi
++++ b/arch/arm64/boot/dts/renesas/r8a77980.dtsi
+@@ -1318,6 +1318,7 @@
+ 		ipmmu_vip0: mmu@e7b00000 {
+ 			compatible = "renesas,ipmmu-r8a77980";
+ 			reg = <0 0xe7b00000 0 0x1000>;
++			renesas,ipmmu-main = <&ipmmu_mm 4>;
+ 			power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
+ 			#iommu-cells = <1>;
+ 		};
+@@ -1325,6 +1326,7 @@
+ 		ipmmu_vip1: mmu@e7960000 {
+ 			compatible = "renesas,ipmmu-r8a77980";
+ 			reg = <0 0xe7960000 0 0x1000>;
++			renesas,ipmmu-main = <&ipmmu_mm 11>;
+ 			power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
+ 			#iommu-cells = <1>;
+ 		};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-evb.dts b/arch/arm64/boot/dts/rockchip/rk3328-evb.dts
+index 49c4b96da3d4..6abc6f4a86cf 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-evb.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-evb.dts
+@@ -92,7 +92,7 @@
+ &i2c1 {
+ 	status = "okay";
+ 
+-	rk805: rk805@18 {
++	rk805: pmic@18 {
+ 		compatible = "rockchip,rk805";
+ 		reg = <0x18>;
+ 		interrupt-parent = <&gpio2>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+index 62936b432f9a..304fad1a0b57 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-rock64.dts
+@@ -169,7 +169,7 @@
+ &i2c1 {
+ 	status = "okay";
+ 
+-	rk805: rk805@18 {
++	rk805: pmic@18 {
+ 		compatible = "rockchip,rk805";
+ 		reg = <0x18>;
+ 		interrupt-parent = <&gpio2>;
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 33cc21fcf4c1..5c4238a80144 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -410,7 +410,7 @@
+ 		reset-names = "usb3-otg";
+ 		status = "disabled";
+ 
+-		usbdrd_dwc3_0: dwc3 {
++		usbdrd_dwc3_0: usb@fe800000 {
+ 			compatible = "snps,dwc3";
+ 			reg = <0x0 0xfe800000 0x0 0x100000>;
+ 			interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH 0>;
+@@ -446,7 +446,7 @@
+ 		reset-names = "usb3-otg";
+ 		status = "disabled";
+ 
+-		usbdrd_dwc3_1: dwc3 {
++		usbdrd_dwc3_1: usb@fe900000 {
+ 			compatible = "snps,dwc3";
+ 			reg = <0x0 0xfe900000 0x0 0x100000>;
+ 			interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH 0>;
+diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
+index 8e9c924423b4..a0b144cfaea7 100644
+--- a/arch/arm64/kernel/machine_kexec.c
++++ b/arch/arm64/kernel/machine_kexec.c
+@@ -177,6 +177,7 @@ void machine_kexec(struct kimage *kimage)
+ 	 * the offline CPUs. Therefore, we must use the __* variant here.
+ 	 */
+ 	__flush_icache_range((uintptr_t)reboot_code_buffer,
++			     (uintptr_t)reboot_code_buffer +
+ 			     arm64_relocate_new_kernel_size);
+ 
+ 	/* Flush the kimage list and its buffers. */
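The machine_kexec() fix above is an argument-contract bug: `__flush_icache_range()` takes (start, end), but the caller passed the region's *size* as the end address, so almost none of the relocation code was flushed. A tiny demonstration of how (start, size) misread as (start, end) silently degrades to a no-op (the flush function here is a stand-in, not the kernel's):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a range-based cache op over [start, end):
 * an "end" below "start" covers zero bytes. */
static size_t bytes_flushed(uintptr_t start, uintptr_t end)
{
	return end > start ? end - start : 0;
}
```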
+diff --git a/arch/powerpc/include/asm/book3s/32/kup.h b/arch/powerpc/include/asm/book3s/32/kup.h
+index 3c0ba22dc360..db0a1c281587 100644
+--- a/arch/powerpc/include/asm/book3s/32/kup.h
++++ b/arch/powerpc/include/asm/book3s/32/kup.h
+@@ -75,7 +75,7 @@
+ 
+ .macro kuap_check	current, gpr
+ #ifdef CONFIG_PPC_KUAP_DEBUG
+-	lwz	\gpr2, KUAP(thread)
++	lwz	\gpr, KUAP(thread)
+ 999:	twnei	\gpr, 0
+ 	EMIT_BUG_ENTRY 999b, __FILE__, __LINE__, (BUGFLAG_WARNING | BUGFLAG_ONCE)
+ #endif
+diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
+index 2f500debae21..0969285996cb 100644
+--- a/arch/powerpc/include/asm/uaccess.h
++++ b/arch/powerpc/include/asm/uaccess.h
+@@ -166,13 +166,17 @@ do {								\
+ ({								\
+ 	long __pu_err;						\
+ 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);		\
++	__typeof__(*(ptr)) __pu_val = (x);			\
++	__typeof__(size) __pu_size = (size);			\
++								\
+ 	if (!is_kernel_addr((unsigned long)__pu_addr))		\
+ 		might_fault();					\
+-	__chk_user_ptr(ptr);					\
++	__chk_user_ptr(__pu_addr);				\
+ 	if (do_allow)								\
+-		__put_user_size((x), __pu_addr, (size), __pu_err);		\
++		__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err);	\
+ 	else									\
+-		__put_user_size_allowed((x), __pu_addr, (size), __pu_err);	\
++		__put_user_size_allowed(__pu_val, __pu_addr, __pu_size, __pu_err); \
++								\
+ 	__pu_err;						\
+ })
+ 
+@@ -180,9 +184,13 @@ do {								\
+ ({									\
+ 	long __pu_err = -EFAULT;					\
+ 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);			\
++	__typeof__(*(ptr)) __pu_val = (x);				\
++	__typeof__(size) __pu_size = (size);				\
++									\
+ 	might_fault();							\
+-	if (access_ok(__pu_addr, size))			\
+-		__put_user_size((x), __pu_addr, (size), __pu_err);	\
++	if (access_ok(__pu_addr, __pu_size))				\
++		__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \
++									\
+ 	__pu_err;							\
+ })
+ 
+@@ -190,8 +198,12 @@ do {								\
+ ({								\
+ 	long __pu_err;						\
+ 	__typeof__(*(ptr)) __user *__pu_addr = (ptr);		\
+-	__chk_user_ptr(ptr);					\
+-	__put_user_size((x), __pu_addr, (size), __pu_err);	\
++	__typeof__(*(ptr)) __pu_val = (x);			\
++	__typeof__(size) __pu_size = (size);			\
++								\
++	__chk_user_ptr(__pu_addr);				\
++	__put_user_size(__pu_val, __pu_addr, __pu_size, __pu_err); \
++								\
+ 	__pu_err;						\
+ })
+ 
+@@ -283,15 +295,18 @@ do {								\
+ 	long __gu_err;						\
+ 	__long_type(*(ptr)) __gu_val;				\
+ 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+-	__chk_user_ptr(ptr);					\
++	__typeof__(size) __gu_size = (size);			\
++								\
++	__chk_user_ptr(__gu_addr);				\
+ 	if (!is_kernel_addr((unsigned long)__gu_addr))		\
+ 		might_fault();					\
+ 	barrier_nospec();					\
+ 	if (do_allow)								\
+-		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);		\
++		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err);	\
+ 	else									\
+-		__get_user_size_allowed(__gu_val, __gu_addr, (size), __gu_err);	\
++		__get_user_size_allowed(__gu_val, __gu_addr, __gu_size, __gu_err); \
+ 	(x) = (__typeof__(*(ptr)))__gu_val;			\
++								\
+ 	__gu_err;						\
+ })
+ 
+@@ -300,12 +315,15 @@ do {								\
+ 	long __gu_err = -EFAULT;					\
+ 	__long_type(*(ptr)) __gu_val = 0;				\
+ 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);		\
++	__typeof__(size) __gu_size = (size);				\
++									\
+ 	might_fault();							\
+-	if (access_ok(__gu_addr, (size))) {		\
++	if (access_ok(__gu_addr, __gu_size)) {				\
+ 		barrier_nospec();					\
+-		__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
++		__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \
+ 	}								\
+ 	(x) = (__force __typeof__(*(ptr)))__gu_val;				\
++									\
+ 	__gu_err;							\
+ })
+ 
+@@ -314,10 +332,13 @@ do {								\
+ 	long __gu_err;						\
+ 	__long_type(*(ptr)) __gu_val;				\
+ 	__typeof__(*(ptr)) __user *__gu_addr = (ptr);	\
+-	__chk_user_ptr(ptr);					\
++	__typeof__(size) __gu_size = (size);			\
++								\
++	__chk_user_ptr(__gu_addr);				\
+ 	barrier_nospec();					\
+-	__get_user_size(__gu_val, __gu_addr, (size), __gu_err);	\
++	__get_user_size(__gu_val, __gu_addr, __gu_size, __gu_err); \
+ 	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
++								\
+ 	__gu_err;						\
+ })
+ 
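The uaccess.h changes above capture `x` and `size` into local `__pu_val`/`__pu_size`/`__gu_size` variables so each macro argument is evaluated exactly once; previously an argument with side effects (say `x++` or a volatile read) could be evaluated more than once. A userspace illustration of the hazard and the fix, using GCC's `__typeof__` extension as the kernel does (macro names here are ours):

```c
#include <assert.h>

/* Evaluates its argument twice -- the pre-fix shape. */
#define PUT_TWICE(x, out) do { (out) = (x); (void)(x); } while (0)

/* Captures the argument once into a typed local, like the patched
 * __put_user_nocheck(). */
#define PUT_ONCE(x, out) do {			\
		__typeof__(x) __pu_val = (x);	\
		(out) = __pu_val;		\
	} while (0)

static int eval_count;
static int next(void) { return ++eval_count; }
```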
+diff --git a/arch/powerpc/kernel/ima_arch.c b/arch/powerpc/kernel/ima_arch.c
+index e34116255ced..957abd592075 100644
+--- a/arch/powerpc/kernel/ima_arch.c
++++ b/arch/powerpc/kernel/ima_arch.c
+@@ -19,12 +19,12 @@ bool arch_ima_get_secureboot(void)
+  * to be stored as an xattr or as an appended signature.
+  *
+  * To avoid duplicate signature verification as much as possible, the IMA
+- * policy rule for module appraisal is added only if CONFIG_MODULE_SIG_FORCE
++ * policy rule for module appraisal is added only if CONFIG_MODULE_SIG
+  * is not enabled.
+  */
+ static const char *const secure_rules[] = {
+ 	"appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
+-#ifndef CONFIG_MODULE_SIG_FORCE
++#ifndef CONFIG_MODULE_SIG
+ 	"appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
+ #endif
+ 	NULL
+@@ -50,7 +50,7 @@ static const char *const secure_and_trusted_rules[] = {
+ 	"measure func=KEXEC_KERNEL_CHECK template=ima-modsig",
+ 	"measure func=MODULE_CHECK template=ima-modsig",
+ 	"appraise func=KEXEC_KERNEL_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
+-#ifndef CONFIG_MODULE_SIG_FORCE
++#ifndef CONFIG_MODULE_SIG
+ 	"appraise func=MODULE_CHECK appraise_flag=check_blacklist appraise_type=imasig|modsig",
+ #endif
+ 	NULL
+diff --git a/arch/powerpc/kernel/vdso32/gettimeofday.S b/arch/powerpc/kernel/vdso32/gettimeofday.S
+index a3951567118a..e7f8f9f1b3f4 100644
+--- a/arch/powerpc/kernel/vdso32/gettimeofday.S
++++ b/arch/powerpc/kernel/vdso32/gettimeofday.S
+@@ -218,11 +218,11 @@ V_FUNCTION_BEGIN(__kernel_clock_getres)
+ 	blr
+ 
+ 	/*
+-	 * invalid clock
++	 * syscall fallback
+ 	 */
+ 99:
+-	li	r3, EINVAL
+-	crset	so
++	li	r0,__NR_clock_getres
++	sc
+ 	blr
+   .cfi_endproc
+ V_FUNCTION_END(__kernel_clock_getres)
+diff --git a/arch/riscv/include/asm/perf_event.h b/arch/riscv/include/asm/perf_event.h
+index 0234048b12bc..062efd3a1d5d 100644
+--- a/arch/riscv/include/asm/perf_event.h
++++ b/arch/riscv/include/asm/perf_event.h
+@@ -12,19 +12,14 @@
+ #include <linux/ptrace.h>
+ #include <linux/interrupt.h>
+ 
++#ifdef CONFIG_RISCV_BASE_PMU
+ #define RISCV_BASE_COUNTERS	2
+ 
+ /*
+  * The RISCV_MAX_COUNTERS parameter should be specified.
+  */
+ 
+-#ifdef CONFIG_RISCV_BASE_PMU
+ #define RISCV_MAX_COUNTERS	2
+-#endif
+-
+-#ifndef RISCV_MAX_COUNTERS
+-#error "Please provide a valid RISCV_MAX_COUNTERS for the PMU."
+-#endif
+ 
+ /*
+  * These are the indexes of bits in counteren register *minus* 1,
+@@ -82,6 +77,7 @@ struct riscv_pmu {
+ 	int		irq;
+ };
+ 
++#endif
+ #ifdef CONFIG_PERF_EVENTS
+ #define perf_arch_bpf_user_pt_regs(regs) (struct user_regs_struct *)regs
+ #endif
+diff --git a/arch/riscv/kernel/Makefile b/arch/riscv/kernel/Makefile
+index f40205cb9a22..1dcc095dc23c 100644
+--- a/arch/riscv/kernel/Makefile
++++ b/arch/riscv/kernel/Makefile
+@@ -38,7 +38,7 @@ obj-$(CONFIG_MODULE_SECTIONS)	+= module-sections.o
+ obj-$(CONFIG_FUNCTION_TRACER)	+= mcount.o ftrace.o
+ obj-$(CONFIG_DYNAMIC_FTRACE)	+= mcount-dyn.o
+ 
+-obj-$(CONFIG_PERF_EVENTS)	+= perf_event.o
++obj-$(CONFIG_RISCV_BASE_PMU)	+= perf_event.o
+ obj-$(CONFIG_PERF_EVENTS)	+= perf_callchain.o
+ obj-$(CONFIG_HAVE_PERF_REGS)	+= perf_regs.o
+ obj-$(CONFIG_RISCV_SBI)		+= sbi.o
+diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile
+index 33b16f4212f7..a4ee3a0e7d20 100644
+--- a/arch/riscv/kernel/vdso/Makefile
++++ b/arch/riscv/kernel/vdso/Makefile
+@@ -33,15 +33,15 @@ $(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE
+ 	$(call if_changed,vdsold)
+ 
+ # We also create a special relocatable object that should mirror the symbol
+-# table and layout of the linked DSO.  With ld -R we can then refer to
+-# these symbols in the kernel code rather than hand-coded addresses.
++# table and layout of the linked DSO. With ld --just-symbols we can then
++# refer to these symbols in the kernel code rather than hand-coded addresses.
+ 
+ SYSCFLAGS_vdso.so.dbg = -shared -s -Wl,-soname=linux-vdso.so.1 \
+ 	-Wl,--build-id -Wl,--hash-style=both
+ $(obj)/vdso-dummy.o: $(src)/vdso.lds $(obj)/rt_sigreturn.o FORCE
+ 	$(call if_changed,vdsold)
+ 
+-LDFLAGS_vdso-syms.o := -r -R
++LDFLAGS_vdso-syms.o := -r --just-symbols
+ $(obj)/vdso-syms.o: $(obj)/vdso-dummy.o FORCE
+ 	$(call if_changed,ld)
+ 
+diff --git a/arch/x86/include/asm/ftrace.h b/arch/x86/include/asm/ftrace.h
+index 85be2f506272..89af0d2c62aa 100644
+--- a/arch/x86/include/asm/ftrace.h
++++ b/arch/x86/include/asm/ftrace.h
+@@ -56,6 +56,12 @@ struct dyn_arch_ftrace {
+ 
+ #ifndef __ASSEMBLY__
+ 
++#if defined(CONFIG_FUNCTION_TRACER) && defined(CONFIG_DYNAMIC_FTRACE)
++extern void set_ftrace_ops_ro(void);
++#else
++static inline void set_ftrace_ops_ro(void) { }
++#endif
++
+ #define ARCH_HAS_SYSCALL_MATCH_SYM_NAME
+ static inline bool arch_syscall_match_sym_name(const char *sym, const char *name)
+ {
+diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
+index 7ba99c0759cf..c121b8f24597 100644
+--- a/arch/x86/include/asm/kvm_host.h
++++ b/arch/x86/include/asm/kvm_host.h
+@@ -574,6 +574,7 @@ struct kvm_vcpu_arch {
+ 	unsigned long cr4;
+ 	unsigned long cr4_guest_owned_bits;
+ 	unsigned long cr8;
++	u32 host_pkru;
+ 	u32 pkru;
+ 	u32 hflags;
+ 	u64 efer;
+diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
+index 91e29b6a86a5..9804a7957f4e 100644
+--- a/arch/x86/include/asm/stackprotector.h
++++ b/arch/x86/include/asm/stackprotector.h
+@@ -55,8 +55,13 @@
+ /*
+  * Initialize the stackprotector canary value.
+  *
+- * NOTE: this must only be called from functions that never return,
++ * NOTE: this must only be called from functions that never return
+  * and it must always be inlined.
++ *
++ * In addition, it should be called from a compilation unit for which
++ * stack protector is disabled. Alternatively, the caller should not end
++ * with a function call which gets tail-call optimized as that would
++ * lead to checking a modified canary value.
+  */
+ static __always_inline void boot_init_stack_canary(void)
+ {
+diff --git a/arch/x86/kernel/ftrace.c b/arch/x86/kernel/ftrace.c
+index 37a0aeaf89e7..b0e641793be4 100644
+--- a/arch/x86/kernel/ftrace.c
++++ b/arch/x86/kernel/ftrace.c
+@@ -407,7 +407,8 @@ create_trampoline(struct ftrace_ops *ops, unsigned int *tramp_size)
+ 
+ 	set_vm_flush_reset_perms(trampoline);
+ 
+-	set_memory_ro((unsigned long)trampoline, npages);
++	if (likely(system_state != SYSTEM_BOOTING))
++		set_memory_ro((unsigned long)trampoline, npages);
+ 	set_memory_x((unsigned long)trampoline, npages);
+ 	return (unsigned long)trampoline;
+ fail:
+@@ -415,6 +416,32 @@ fail:
+ 	return 0;
+ }
+ 
++void set_ftrace_ops_ro(void)
++{
++	struct ftrace_ops *ops;
++	unsigned long start_offset;
++	unsigned long end_offset;
++	unsigned long npages;
++	unsigned long size;
++
++	do_for_each_ftrace_op(ops, ftrace_ops_list) {
++		if (!(ops->flags & FTRACE_OPS_FL_ALLOC_TRAMP))
++			continue;
++
++		if (ops->flags & FTRACE_OPS_FL_SAVE_REGS) {
++			start_offset = (unsigned long)ftrace_regs_caller;
++			end_offset = (unsigned long)ftrace_regs_caller_end;
++		} else {
++			start_offset = (unsigned long)ftrace_caller;
++			end_offset = (unsigned long)ftrace_epilogue;
++		}
++		size = end_offset - start_offset;
++		size = size + RET_SIZE + sizeof(void *);
++		npages = DIV_ROUND_UP(size, PAGE_SIZE);
++		set_memory_ro((unsigned long)ops->trampoline, npages);
++	} while_for_each_ftrace_op(ops);
++}
++
+ static unsigned long calc_trampoline_call_offset(bool save_regs)
+ {
+ 	unsigned long start_offset;
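In set_ftrace_ops_ro() above, each trampoline's size is re-derived and rounded up to whole pages before set_memory_ro() is applied. The rounding uses the kernel's DIV_ROUND_UP helper; a self-contained sketch of that page computation (PAGE_SIZE fixed at 4 KiB here for illustration):

```c
#include <assert.h>

#define PAGE_SIZE 4096UL
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Pages needed to cover a trampoline of `size` bytes. */
static unsigned long trampoline_pages(unsigned long size)
{
	return DIV_ROUND_UP(size, PAGE_SIZE);
}
```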
+diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
+index 69881b2d446c..9674321ce3a3 100644
+--- a/arch/x86/kernel/smpboot.c
++++ b/arch/x86/kernel/smpboot.c
+@@ -262,6 +262,14 @@ static void notrace start_secondary(void *unused)
+ 
+ 	wmb();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
++
++	/*
++	 * Prevent tail call to cpu_startup_entry() because the stack protector
++	 * guard has been changed a couple of function calls up, in
++	 * boot_init_stack_canary() and must not be checked before tail calling
++	 * another function.
++	 */
++	prevent_tail_call_optimization();
+ }
+ 
+ /**
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 80537dcbddef..9414f02a55ea 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -611,23 +611,23 @@ EXPORT_SYMBOL_GPL(unwind_next_frame);
+ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 		    struct pt_regs *regs, unsigned long *first_frame)
+ {
+-	if (!orc_init)
+-		goto done;
+-
+ 	memset(state, 0, sizeof(*state));
+ 	state->task = task;
+ 
++	if (!orc_init)
++		goto err;
++
+ 	/*
+ 	 * Refuse to unwind the stack of a task while it's executing on another
+ 	 * CPU.  This check is racy, but that's ok: the unwinder has other
+ 	 * checks to prevent it from going off the rails.
+ 	 */
+ 	if (task_on_another_cpu(task))
+-		goto done;
++		goto err;
+ 
+ 	if (regs) {
+ 		if (user_mode(regs))
+-			goto done;
++			goto the_end;
+ 
+ 		state->ip = regs->ip;
+ 		state->sp = regs->sp;
+@@ -660,6 +660,7 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 		 * generate some kind of backtrace if this happens.
+ 		 */
+ 		void *next_page = (void *)PAGE_ALIGN((unsigned long)state->sp);
++		state->error = true;
+ 		if (get_stack_info(next_page, state->task, &state->stack_info,
+ 				   &state->stack_mask))
+ 			return;
+@@ -685,8 +686,9 @@ void __unwind_start(struct unwind_state *state, struct task_struct *task,
+ 
+ 	return;
+ 
+-done:
++err:
++	state->error = true;
++the_end:
+ 	state->stack_info.type = STACK_TYPE_UNKNOWN;
+-	return;
+ }
+ EXPORT_SYMBOL_GPL(__unwind_start);
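The unwind_orc.c rework above splits the old single `done:` label in two: `err:` for real failures (which now set `state->error` so callers can report a truncated backtrace) and `the_end:` for benign stops such as user-mode registers. A compact model of that control flow (names and return values are illustrative):

```c
#include <assert.h>

/* Returns 1 if unwinding proceeds, 0 if it stops; *error flags real failures. */
static int unwind_start(int orc_ready, int user_mode, int *error)
{
	*error = 0;
	if (!orc_ready)
		goto err;	/* missing ORC data: a reportable failure */
	if (user_mode)
		goto the_end;	/* user-mode regs: benign stop, no error */
	return 1;
err:
	*error = 1;		/* falls through: both paths mark the stack unknown */
the_end:
	return 0;
}
```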
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index eec7b2d93104..3a2f05ef51fa 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -5504,6 +5504,23 @@ static bool nested_vmx_exit_handled_vmcs_access(struct kvm_vcpu *vcpu,
+ 	return 1 & (b >> (field & 7));
+ }
+ 
++static bool nested_vmx_exit_handled_mtf(struct vmcs12 *vmcs12)
++{
++	u32 entry_intr_info = vmcs12->vm_entry_intr_info_field;
++
++	if (nested_cpu_has_mtf(vmcs12))
++		return true;
++
++	/*
++	 * An MTF VM-exit may be injected into the guest by setting the
++	 * interruption-type to 7 (other event) and the vector field to 0. Such
++	 * is the case regardless of the 'monitor trap flag' VM-execution
++	 * control.
++	 */
++	return entry_intr_info == (INTR_INFO_VALID_MASK
++				   | INTR_TYPE_OTHER_EVENT);
++}
++
+ /*
+  * Return 1 if we should exit from L2 to L1 to handle an exit, or 0 if we
+  * should handle it ourselves in L0 (and then continue L2). Only call this
+@@ -5618,7 +5635,7 @@ bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason)
+ 	case EXIT_REASON_MWAIT_INSTRUCTION:
+ 		return nested_cpu_has(vmcs12, CPU_BASED_MWAIT_EXITING);
+ 	case EXIT_REASON_MONITOR_TRAP_FLAG:
+-		return nested_cpu_has(vmcs12, CPU_BASED_MONITOR_TRAP_FLAG);
++		return nested_vmx_exit_handled_mtf(vmcs12);
+ 	case EXIT_REASON_MONITOR_INSTRUCTION:
+ 		return nested_cpu_has(vmcs12, CPU_BASED_MONITOR_EXITING);
+ 	case EXIT_REASON_PAUSE_INSTRUCTION:
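nested_vmx_exit_handled_mtf() in the hunk above recognizes an *injected* MTF event even when the 'monitor trap flag' execution control is clear: VM-entry interruption info with the valid bit set, type 7 (other event), and vector 0 encodes a pending MTF. A sketch of that field check, assuming the architectural layout (valid = bit 31, type = bits 10:8, vector = bits 7:0):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define INTR_INFO_VALID_MASK	(1u << 31)
#define INTR_TYPE_OTHER_EVENT	(7u << 8)	/* interruption type 7 */

/* True if L1's entry interruption info encodes an injected MTF event:
 * valid, type "other event", vector 0 -- and nothing else set. */
static bool entry_info_is_mtf(uint32_t entry_intr_info)
{
	return entry_intr_info == (INTR_INFO_VALID_MASK | INTR_TYPE_OTHER_EVENT);
}
```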
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index c1ffe7d24f83..a83c94a971ee 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1380,7 +1380,6 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 
+ 	vmx_vcpu_pi_load(vcpu, cpu);
+ 
+-	vmx->host_pkru = read_pkru();
+ 	vmx->host_debugctlmsr = get_debugctlmsr();
+ }
+ 
+@@ -6538,11 +6537,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 
+ 	kvm_load_guest_xsave_state(vcpu);
+ 
+-	if (static_cpu_has(X86_FEATURE_PKU) &&
+-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE) &&
+-	    vcpu->arch.pkru != vmx->host_pkru)
+-		__write_pkru(vcpu->arch.pkru);
+-
+ 	pt_guest_enter(vmx);
+ 
+ 	atomic_switch_perf_msrs(vmx);
+@@ -6631,18 +6625,6 @@ static void vmx_vcpu_run(struct kvm_vcpu *vcpu)
+ 
+ 	pt_guest_exit(vmx);
+ 
+-	/*
+-	 * eager fpu is enabled if PKEY is supported and CR4 is switched
+-	 * back on host, so it is safe to read guest PKRU from current
+-	 * XSAVE.
+-	 */
+-	if (static_cpu_has(X86_FEATURE_PKU) &&
+-	    kvm_read_cr4_bits(vcpu, X86_CR4_PKE)) {
+-		vcpu->arch.pkru = rdpkru();
+-		if (vcpu->arch.pkru != vmx->host_pkru)
+-			__write_pkru(vmx->host_pkru);
+-	}
+-
+ 	kvm_load_host_xsave_state(vcpu);
+ 
+ 	vmx->nested.nested_run_pending = 0;
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 17650bda4331..7f3371a39ed0 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -809,11 +809,25 @@ void kvm_load_guest_xsave_state(struct kvm_vcpu *vcpu)
+ 		    vcpu->arch.ia32_xss != host_xss)
+ 			wrmsrl(MSR_IA32_XSS, vcpu->arch.ia32_xss);
+ 	}
++
++	if (static_cpu_has(X86_FEATURE_PKU) &&
++	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
++	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU)) &&
++	    vcpu->arch.pkru != vcpu->arch.host_pkru)
++		__write_pkru(vcpu->arch.pkru);
+ }
+ EXPORT_SYMBOL_GPL(kvm_load_guest_xsave_state);
+ 
+ void kvm_load_host_xsave_state(struct kvm_vcpu *vcpu)
+ {
++	if (static_cpu_has(X86_FEATURE_PKU) &&
++	    (kvm_read_cr4_bits(vcpu, X86_CR4_PKE) ||
++	     (vcpu->arch.xcr0 & XFEATURE_MASK_PKRU))) {
++		vcpu->arch.pkru = rdpkru();
++		if (vcpu->arch.pkru != vcpu->arch.host_pkru)
++			__write_pkru(vcpu->arch.host_pkru);
++	}
++
+ 	if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE)) {
+ 
+ 		if (vcpu->arch.xcr0 != host_xcr0)
+@@ -3529,6 +3543,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 
+ 	kvm_x86_ops->vcpu_load(vcpu, cpu);
+ 
++	/* Save host pkru register if supported */
++	vcpu->arch.host_pkru = read_pkru();
++
+ 	/* Apply any externally detected TSC adjustments (due to suspend) */
+ 	if (unlikely(vcpu->arch.tsc_offset_adjustment)) {
+ 		adjust_tsc_offset_host(vcpu, vcpu->arch.tsc_offset_adjustment);
+@@ -3722,7 +3739,7 @@ static int kvm_vcpu_ioctl_x86_setup_mce(struct kvm_vcpu *vcpu,
+ 	unsigned bank_num = mcg_cap & 0xff, bank;
+ 
+ 	r = -EINVAL;
+-	if (!bank_num || bank_num >= KVM_MAX_MCE_BANKS)
++	if (!bank_num || bank_num > KVM_MAX_MCE_BANKS)
+ 		goto out;
+ 	if (mcg_cap & ~(kvm_mce_cap_supported | 0xff | 0xff0000))
+ 		goto out;
+diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
+index abbdecb75fad..023e1ec5e153 100644
+--- a/arch/x86/mm/init_64.c
++++ b/arch/x86/mm/init_64.c
+@@ -54,6 +54,7 @@
+ #include <asm/init.h>
+ #include <asm/uv/uv.h>
+ #include <asm/setup.h>
++#include <asm/ftrace.h>
+ 
+ #include "mm_internal.h"
+ 
+@@ -1288,6 +1289,8 @@ void mark_rodata_ro(void)
+ 	all_end = roundup((unsigned long)_brk_end, PMD_SIZE);
+ 	set_memory_nx(text_end, (all_end - text_end) >> PAGE_SHIFT);
+ 
++	set_ftrace_ops_ro();
++
+ #ifdef CONFIG_CPA_DEBUG
+ 	printk(KERN_INFO "Testing CPA: undo %lx-%lx\n", start, end);
+ 	set_memory_rw(start, (end-start) >> PAGE_SHIFT);
+diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
+index 802ee5bba66c..0cebe5db691d 100644
+--- a/arch/x86/xen/smp_pv.c
++++ b/arch/x86/xen/smp_pv.c
+@@ -92,6 +92,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
+ 	cpu_bringup();
+ 	boot_init_stack_canary();
+ 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
++	prevent_tail_call_optimization();
+ }
+ 
+ void xen_smp_intr_free_pv(unsigned int cpu)
+diff --git a/crypto/lrw.c b/crypto/lrw.c
+index 63c485c0d8a6..9b20fc4b2efb 100644
+--- a/crypto/lrw.c
++++ b/crypto/lrw.c
+@@ -287,7 +287,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
+ 	crypto_free_skcipher(ctx->child);
+ }
+ 
+-static void free(struct skcipher_instance *inst)
++static void free_inst(struct skcipher_instance *inst)
+ {
+ 	crypto_drop_skcipher(skcipher_instance_ctx(inst));
+ 	kfree(inst);
+@@ -400,7 +400,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.encrypt = encrypt;
+ 	inst->alg.decrypt = decrypt;
+ 
+-	inst->free = free;
++	inst->free = free_inst;
+ 
+ 	err = skcipher_register_instance(tmpl, inst);
+ 	if (err)
+diff --git a/crypto/xts.c b/crypto/xts.c
+index 29efa15f1495..983dae2bb2db 100644
+--- a/crypto/xts.c
++++ b/crypto/xts.c
+@@ -322,7 +322,7 @@ static void exit_tfm(struct crypto_skcipher *tfm)
+ 	crypto_free_cipher(ctx->tweak);
+ }
+ 
+-static void free(struct skcipher_instance *inst)
++static void free_inst(struct skcipher_instance *inst)
+ {
+ 	crypto_drop_skcipher(skcipher_instance_ctx(inst));
+ 	kfree(inst);
+@@ -434,7 +434,7 @@ static int create(struct crypto_template *tmpl, struct rtattr **tb)
+ 	inst->alg.encrypt = encrypt;
+ 	inst->alg.decrypt = decrypt;
+ 
+-	inst->free = free;
++	inst->free = free_inst;
+ 
+ 	err = skcipher_register_instance(tmpl, inst);
+ 	if (err)
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 35dd2f1fb0e6..03b3067811c9 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2042,23 +2042,31 @@ void acpi_ec_set_gpe_wake_mask(u8 action)
+ 		acpi_set_gpe_wake_mask(NULL, first_ec->gpe, action);
+ }
+ 
+-bool acpi_ec_other_gpes_active(void)
+-{
+-	return acpi_any_gpe_status_set(first_ec ? first_ec->gpe : U32_MAX);
+-}
+-
+ bool acpi_ec_dispatch_gpe(void)
+ {
+ 	u32 ret;
+ 
+ 	if (!first_ec)
++		return acpi_any_gpe_status_set(U32_MAX);
++
++	/*
++	 * Report wakeup if the status bit is set for any enabled GPE other
++	 * than the EC one.
++	 */
++	if (acpi_any_gpe_status_set(first_ec->gpe))
++		return true;
++
++	if (ec_no_wakeup)
+ 		return false;
+ 
++	/*
++	 * Dispatch the EC GPE in-band, but do not report wakeup in any case
++	 * to allow the caller to process events properly after that.
++	 */
+ 	ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
+-	if (ret == ACPI_INTERRUPT_HANDLED) {
++	if (ret == ACPI_INTERRUPT_HANDLED)
+ 		pm_pr_dbg("EC GPE dispatched\n");
+-		return true;
+-	}
++
+ 	return false;
+ }
+ #endif /* CONFIG_PM_SLEEP */
+diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
+index d44c591c4ee4..3616daec650b 100644
+--- a/drivers/acpi/internal.h
++++ b/drivers/acpi/internal.h
+@@ -202,7 +202,6 @@ void acpi_ec_remove_query_handler(struct acpi_ec *ec, u8 query_bit);
+ 
+ #ifdef CONFIG_PM_SLEEP
+ void acpi_ec_flush_work(void);
+-bool acpi_ec_other_gpes_active(void);
+ bool acpi_ec_dispatch_gpe(void);
+ #endif
+ 
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 4edc8a3ce40f..3850704570c0 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -1013,20 +1013,10 @@ static bool acpi_s2idle_wake(void)
+ 		if (acpi_check_wakeup_handlers())
+ 			return true;
+ 
+-		/*
+-		 * If the status bit is set for any enabled GPE other than the
+-		 * EC one, the wakeup is regarded as a genuine one.
+-		 */
+-		if (acpi_ec_other_gpes_active())
++		/* Check non-EC GPE wakeups and dispatch the EC GPE. */
++		if (acpi_ec_dispatch_gpe())
+ 			return true;
+ 
+-		/*
+-		 * If the EC GPE status bit has not been set, the wakeup is
+-		 * regarded as a spurious one.
+-		 */
+-		if (!acpi_ec_dispatch_gpe())
+-			return false;
+-
+ 		/*
+ 		 * Cancel the wakeup and process all pending events in case
+ 		 * there are any wakeup ones in there.
+diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
+index 0736248999b0..d52f33881ab6 100644
+--- a/drivers/block/virtio_blk.c
++++ b/drivers/block/virtio_blk.c
+@@ -32,6 +32,15 @@ struct virtio_blk_vq {
+ } ____cacheline_aligned_in_smp;
+ 
+ struct virtio_blk {
++	/*
++	 * This mutex must be held by anything that may run after
++	 * virtblk_remove() sets vblk->vdev to NULL.
++	 *
++	 * blk-mq, virtqueue processing, and sysfs attribute code paths are
++	 * shut down before vblk->vdev is set to NULL and therefore do not need
++	 * to hold this mutex.
++	 */
++	struct mutex vdev_mutex;
+ 	struct virtio_device *vdev;
+ 
+ 	/* The disk structure for the kernel. */
+@@ -43,6 +52,13 @@ struct virtio_blk {
+ 	/* Process context for config space updates */
+ 	struct work_struct config_work;
+ 
++	/*
++	 * Tracks references from block_device_operations open/release and
++	 * virtio_driver probe/remove so this object can be freed once no
++	 * longer in use.
++	 */
++	refcount_t refs;
++
+ 	/* What host tells us, plus 2 for header & tailer. */
+ 	unsigned int sg_elems;
+ 
+@@ -294,10 +310,55 @@ out:
+ 	return err;
+ }
+ 
++static void virtblk_get(struct virtio_blk *vblk)
++{
++	refcount_inc(&vblk->refs);
++}
++
++static void virtblk_put(struct virtio_blk *vblk)
++{
++	if (refcount_dec_and_test(&vblk->refs)) {
++		ida_simple_remove(&vd_index_ida, vblk->index);
++		mutex_destroy(&vblk->vdev_mutex);
++		kfree(vblk);
++	}
++}
++
++static int virtblk_open(struct block_device *bd, fmode_t mode)
++{
++	struct virtio_blk *vblk = bd->bd_disk->private_data;
++	int ret = 0;
++
++	mutex_lock(&vblk->vdev_mutex);
++
++	if (vblk->vdev)
++		virtblk_get(vblk);
++	else
++		ret = -ENXIO;
++
++	mutex_unlock(&vblk->vdev_mutex);
++	return ret;
++}
++
++static void virtblk_release(struct gendisk *disk, fmode_t mode)
++{
++	struct virtio_blk *vblk = disk->private_data;
++
++	virtblk_put(vblk);
++}
++
+ /* We provide getgeo only to please some old bootloader/partitioning tools */
+ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
+ {
+ 	struct virtio_blk *vblk = bd->bd_disk->private_data;
++	int ret = 0;
++
++	mutex_lock(&vblk->vdev_mutex);
++
++	if (!vblk->vdev) {
++		ret = -ENXIO;
++		goto out;
++	}
+ 
+ 	/* see if the host passed in geometry config */
+ 	if (virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_GEOMETRY)) {
+@@ -313,11 +374,15 @@ static int virtblk_getgeo(struct block_device *bd, struct hd_geometry *geo)
+ 		geo->sectors = 1 << 5;
+ 		geo->cylinders = get_capacity(bd->bd_disk) >> 11;
+ 	}
+-	return 0;
++out:
++	mutex_unlock(&vblk->vdev_mutex);
++	return ret;
+ }
+ 
+ static const struct block_device_operations virtblk_fops = {
+ 	.owner  = THIS_MODULE,
++	.open = virtblk_open,
++	.release = virtblk_release,
+ 	.getgeo = virtblk_getgeo,
+ };
+ 
+@@ -657,6 +722,10 @@ static int virtblk_probe(struct virtio_device *vdev)
+ 		goto out_free_index;
+ 	}
+ 
++	/* This reference is dropped in virtblk_remove(). */
++	refcount_set(&vblk->refs, 1);
++	mutex_init(&vblk->vdev_mutex);
++
+ 	vblk->vdev = vdev;
+ 	vblk->sg_elems = sg_elems;
+ 
+@@ -822,8 +891,6 @@ out:
+ static void virtblk_remove(struct virtio_device *vdev)
+ {
+ 	struct virtio_blk *vblk = vdev->priv;
+-	int index = vblk->index;
+-	int refc;
+ 
+ 	/* Make sure no work handler is accessing the device. */
+ 	flush_work(&vblk->config_work);
+@@ -833,18 +900,21 @@ static void virtblk_remove(struct virtio_device *vdev)
+ 
+ 	blk_mq_free_tag_set(&vblk->tag_set);
+ 
++	mutex_lock(&vblk->vdev_mutex);
++
+ 	/* Stop all the virtqueues. */
+ 	vdev->config->reset(vdev);
+ 
+-	refc = kref_read(&disk_to_dev(vblk->disk)->kobj.kref);
++	/* Virtqueues are stopped, nothing can use vblk->vdev anymore. */
++	vblk->vdev = NULL;
++
+ 	put_disk(vblk->disk);
+ 	vdev->config->del_vqs(vdev);
+ 	kfree(vblk->vqs);
+-	kfree(vblk);
+ 
+-	/* Only free device id if we don't have any users */
+-	if (refc == 1)
+-		ida_simple_remove(&vd_index_ida, index);
++	mutex_unlock(&vblk->vdev_mutex);
++
++	virtblk_put(vblk);
+ }
+ 
+ #ifdef CONFIG_PM_SLEEP
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index 305544b68b8a..f22b7aed6e64 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -3512,6 +3512,9 @@ static int __clk_core_init(struct clk_core *core)
+ out:
+ 	clk_pm_runtime_put(core);
+ unlock:
++	if (ret)
++		hlist_del_init(&core->child_node);
++
+ 	clk_prepare_unlock();
+ 
+ 	if (!ret)
+diff --git a/drivers/clk/rockchip/clk-rk3228.c b/drivers/clk/rockchip/clk-rk3228.c
+index d17cfb7a3ff4..d7243c09cc84 100644
+--- a/drivers/clk/rockchip/clk-rk3228.c
++++ b/drivers/clk/rockchip/clk-rk3228.c
+@@ -156,8 +156,6 @@ PNAME(mux_i2s_out_p)		= { "i2s1_pre", "xin12m" };
+ PNAME(mux_i2s2_p)		= { "i2s2_src", "i2s2_frac", "xin12m" };
+ PNAME(mux_sclk_spdif_p)		= { "sclk_spdif_src", "spdif_frac", "xin12m" };
+ 
+-PNAME(mux_aclk_gpu_pre_p)	= { "cpll_gpu", "gpll_gpu", "hdmiphy_gpu", "usb480m_gpu" };
+-
+ PNAME(mux_uart0_p)		= { "uart0_src", "uart0_frac", "xin24m" };
+ PNAME(mux_uart1_p)		= { "uart1_src", "uart1_frac", "xin24m" };
+ PNAME(mux_uart2_p)		= { "uart2_src", "uart2_frac", "xin24m" };
+@@ -468,16 +466,9 @@ static struct rockchip_clk_branch rk3228_clk_branches[] __initdata = {
+ 			RK2928_CLKSEL_CON(24), 6, 10, DFLAGS,
+ 			RK2928_CLKGATE_CON(2), 8, GFLAGS),
+ 
+-	GATE(0, "cpll_gpu", "cpll", 0,
+-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
+-	GATE(0, "gpll_gpu", "gpll", 0,
+-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
+-	GATE(0, "hdmiphy_gpu", "hdmiphy", 0,
+-			RK2928_CLKGATE_CON(3), 13, GFLAGS),
+-	GATE(0, "usb480m_gpu", "usb480m", 0,
++	COMPOSITE(0, "aclk_gpu_pre", mux_pll_src_4plls_p, 0,
++			RK2928_CLKSEL_CON(34), 5, 2, MFLAGS, 0, 5, DFLAGS,
+ 			RK2928_CLKGATE_CON(3), 13, GFLAGS),
+-	COMPOSITE_NOGATE(0, "aclk_gpu_pre", mux_aclk_gpu_pre_p, 0,
+-			RK2928_CLKSEL_CON(34), 5, 2, MFLAGS, 0, 5, DFLAGS),
+ 
+ 	COMPOSITE(SCLK_SPI0, "sclk_spi0", mux_pll_src_2plls_p, 0,
+ 			RK2928_CLKSEL_CON(25), 8, 1, MFLAGS, 0, 7, DFLAGS,
+@@ -582,8 +573,8 @@ static struct rockchip_clk_branch rk3228_clk_branches[] __initdata = {
+ 	GATE(0, "pclk_peri_noc", "pclk_peri", CLK_IGNORE_UNUSED, RK2928_CLKGATE_CON(12), 2, GFLAGS),
+ 
+ 	/* PD_GPU */
+-	GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(13), 14, GFLAGS),
+-	GATE(0, "aclk_gpu_noc", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(13), 15, GFLAGS),
++	GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(7), 14, GFLAGS),
++	GATE(0, "aclk_gpu_noc", "aclk_gpu_pre", 0, RK2928_CLKGATE_CON(7), 15, GFLAGS),
+ 
+ 	/* PD_BUS */
+ 	GATE(0, "sclk_initmem_mbist", "aclk_cpu", 0, RK2928_CLKGATE_CON(8), 1, GFLAGS),
+diff --git a/drivers/clk/ti/clkctrl.c b/drivers/clk/ti/clkctrl.c
+index 062266034d84..9019624e37bc 100644
+--- a/drivers/clk/ti/clkctrl.c
++++ b/drivers/clk/ti/clkctrl.c
+@@ -461,7 +461,6 @@ static char * __init clkctrl_get_name(struct device_node *np)
+ 			return name;
+ 		}
+ 	}
+-	of_node_put(np);
+ 
+ 	return NULL;
+ }
+diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
+index c81e1ff29069..b4c014464a20 100644
+--- a/drivers/cpufreq/intel_pstate.c
++++ b/drivers/cpufreq/intel_pstate.c
+@@ -1058,7 +1058,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
+ 
+ 	update_turbo_state();
+ 	if (global.turbo_disabled) {
+-		pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
++		pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n");
+ 		mutex_unlock(&intel_pstate_limits_lock);
+ 		mutex_unlock(&intel_pstate_driver_lock);
+ 		return -EPERM;
+diff --git a/drivers/dma/mmp_tdma.c b/drivers/dma/mmp_tdma.c
+index 10117f271b12..d683232d7fea 100644
+--- a/drivers/dma/mmp_tdma.c
++++ b/drivers/dma/mmp_tdma.c
+@@ -363,6 +363,8 @@ static void mmp_tdma_free_descriptor(struct mmp_tdma_chan *tdmac)
+ 		gen_pool_free(gpool, (unsigned long)tdmac->desc_arr,
+ 				size);
+ 	tdmac->desc_arr = NULL;
++	if (tdmac->status == DMA_ERROR)
++		tdmac->status = DMA_COMPLETE;
+ 
+ 	return;
+ }
+@@ -443,7 +445,8 @@ static struct dma_async_tx_descriptor *mmp_tdma_prep_dma_cyclic(
+ 	if (!desc)
+ 		goto err_out;
+ 
+-	mmp_tdma_config_write(chan, direction, &tdmac->slave_config);
++	if (mmp_tdma_config_write(chan, direction, &tdmac->slave_config))
++		goto err_out;
+ 
+ 	while (buf < buf_len) {
+ 		desc = &tdmac->desc_arr[i];
+diff --git a/drivers/dma/pch_dma.c b/drivers/dma/pch_dma.c
+index 581e7a290d98..a3b0b4c56a19 100644
+--- a/drivers/dma/pch_dma.c
++++ b/drivers/dma/pch_dma.c
+@@ -865,6 +865,7 @@ static int pch_dma_probe(struct pci_dev *pdev,
+ 	}
+ 
+ 	pci_set_master(pdev);
++	pd->dma.dev = &pdev->dev;
+ 
+ 	err = request_irq(pdev->irq, pd_irq, IRQF_SHARED, DRV_NAME, pd);
+ 	if (err) {
+@@ -880,7 +881,6 @@ static int pch_dma_probe(struct pci_dev *pdev,
+ 		goto err_free_irq;
+ 	}
+ 
+-	pd->dma.dev = &pdev->dev;
+ 
+ 	INIT_LIST_HEAD(&pd->dma.channels);
+ 
+diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
+index a9c5d5cc9f2b..5d5f1d0ce16c 100644
+--- a/drivers/dma/xilinx/xilinx_dma.c
++++ b/drivers/dma/xilinx/xilinx_dma.c
+@@ -1229,16 +1229,16 @@ static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
+ 		return ret;
+ 
+ 	spin_lock_irqsave(&chan->lock, flags);
+-
+-	desc = list_last_entry(&chan->active_list,
+-			       struct xilinx_dma_tx_descriptor, node);
+-	/*
+-	 * VDMA and simple mode do not support residue reporting, so the
+-	 * residue field will always be 0.
+-	 */
+-	if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
+-		residue = xilinx_dma_get_residue(chan, desc);
+-
++	if (!list_empty(&chan->active_list)) {
++		desc = list_last_entry(&chan->active_list,
++				       struct xilinx_dma_tx_descriptor, node);
++		/*
++		 * VDMA and simple mode do not support residue reporting, so the
++		 * residue field will always be 0.
++		 */
++		if (chan->has_sg && chan->xdev->dma_config->dmatype != XDMA_TYPE_VDMA)
++			residue = xilinx_dma_get_residue(chan, desc);
++	}
+ 	spin_unlock_irqrestore(&chan->lock, flags);
+ 
+ 	dma_set_residue(txstate, residue);
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index 31f9f0e369b9..55b031d2c989 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -16,7 +16,7 @@
+ int efi_tpm_final_log_size;
+ EXPORT_SYMBOL(efi_tpm_final_log_size);
+ 
+-static int tpm2_calc_event_log_size(void *data, int count, void *size_info)
++static int __init tpm2_calc_event_log_size(void *data, int count, void *size_info)
+ {
+ 	struct tcg_pcr_event2_head *header;
+ 	int event_size, size = 0;
+diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
+index 5638b4e5355f..4269ea9a817e 100644
+--- a/drivers/gpio/gpio-pca953x.c
++++ b/drivers/gpio/gpio-pca953x.c
+@@ -531,7 +531,7 @@ static int pca953x_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
+ {
+ 	struct pca953x_chip *chip = gpiochip_get_data(gc);
+ 
+-	switch (config) {
++	switch (pinconf_to_config_param(config)) {
+ 	case PIN_CONFIG_BIAS_PULL_UP:
+ 	case PIN_CONFIG_BIAS_PULL_DOWN:
+ 		return pca953x_gpio_set_pull_up_down(chip, offset, config);
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+index 2672dc64a310..6a76ab16500f 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fb.c
+@@ -133,8 +133,7 @@ static int amdgpufb_create_pinned_object(struct amdgpu_fbdev *rfbdev,
+ 	u32 cpp;
+ 	u64 flags = AMDGPU_GEM_CREATE_CPU_ACCESS_REQUIRED |
+ 			       AMDGPU_GEM_CREATE_VRAM_CONTIGUOUS     |
+-			       AMDGPU_GEM_CREATE_VRAM_CLEARED 	     |
+-			       AMDGPU_GEM_CREATE_CPU_GTT_USWC;
++			       AMDGPU_GEM_CREATE_VRAM_CLEARED;
+ 
+ 	info = drm_get_format_info(adev->ddev, mode_cmd);
+ 	cpp = info->cpp[0];
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 73337e658aff..906648fca9ef 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -1177,6 +1177,8 @@ static const struct amdgpu_gfxoff_quirk amdgpu_gfxoff_quirk_list[] = {
+ 	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc8 },
+ 	/* https://bugzilla.kernel.org/show_bug.cgi?id=207171 */
+ 	{ 0x1002, 0x15dd, 0x103c, 0x83e7, 0xd3 },
++	/* GFXOFF is unstable on C6 parts with a VBIOS 113-RAVEN-114 */
++	{ 0x1002, 0x15dd, 0x1002, 0x15dd, 0xc6 },
+ 	{ 0, 0, 0, 0, 0 },
+ };
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 8136a58deb39..5e27a67fbc58 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7716,6 +7716,7 @@ static int dm_update_plane_state(struct dc *dc,
+ 	struct drm_crtc_state *old_crtc_state, *new_crtc_state;
+ 	struct dm_crtc_state *dm_new_crtc_state, *dm_old_crtc_state;
+ 	struct dm_plane_state *dm_new_plane_state, *dm_old_plane_state;
++	struct amdgpu_crtc *new_acrtc;
+ 	bool needs_reset;
+ 	int ret = 0;
+ 
+@@ -7725,9 +7726,30 @@ static int dm_update_plane_state(struct dc *dc,
+ 	dm_new_plane_state = to_dm_plane_state(new_plane_state);
+ 	dm_old_plane_state = to_dm_plane_state(old_plane_state);
+ 
+-	/*TODO Implement atomic check for cursor plane */
+-	if (plane->type == DRM_PLANE_TYPE_CURSOR)
++	/*TODO Implement better atomic check for cursor plane */
++	if (plane->type == DRM_PLANE_TYPE_CURSOR) {
++		if (!enable || !new_plane_crtc ||
++			drm_atomic_plane_disabling(plane->state, new_plane_state))
++			return 0;
++
++		new_acrtc = to_amdgpu_crtc(new_plane_crtc);
++
++		if ((new_plane_state->crtc_w > new_acrtc->max_cursor_width) ||
++			(new_plane_state->crtc_h > new_acrtc->max_cursor_height)) {
++			DRM_DEBUG_ATOMIC("Bad cursor size %d x %d\n",
++							 new_plane_state->crtc_w, new_plane_state->crtc_h);
++			return -EINVAL;
++		}
++
++		if (new_plane_state->crtc_x <= -new_acrtc->max_cursor_width ||
++			new_plane_state->crtc_y <= -new_acrtc->max_cursor_height) {
++			DRM_DEBUG_ATOMIC("Bad cursor position %d, %d\n",
++							 new_plane_state->crtc_x, new_plane_state->crtc_y);
++			return -EINVAL;
++		}
++
+ 		return 0;
++	}
+ 
+ 	needs_reset = should_reset_plane(state, plane, old_plane_state,
+ 					 new_plane_state);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index fd9e69634c50..1b6c75a4dd60 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -2885,6 +2885,12 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
+ 					sizeof(hpd_irq_dpcd_data),
+ 					"Status: ");
+ 
++		for (i = 0; i < MAX_PIPES; i++) {
++			pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
++			if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
++				link->dc->hwss.blank_stream(pipe_ctx);
++		}
++
+ 		for (i = 0; i < MAX_PIPES; i++) {
+ 			pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
+ 			if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
+@@ -2904,6 +2910,12 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, union hpd_irq_data *out_hpd
+ 		if (pipe_ctx->stream->signal == SIGNAL_TYPE_DISPLAY_PORT_MST)
+ 			dc_link_reallocate_mst_payload(link);
+ 
++		for (i = 0; i < MAX_PIPES; i++) {
++			pipe_ctx = &link->dc->current_state->res_ctx.pipe_ctx[i];
++			if (pipe_ctx && pipe_ctx->stream && pipe_ctx->stream->link == link)
++				link->dc->hwss.unblank_stream(pipe_ctx, &previous_link_settings);
++		}
++
+ 		status = false;
+ 		if (out_link_loss)
+ 			*out_link_loss = true;
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index 6ddbb00ed37a..8c20e9e907b2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -239,24 +239,24 @@ static void delay_cursor_until_vupdate(struct pipe_ctx *pipe_ctx, struct dc *dc)
+ 	struct dc_stream_state *stream = pipe_ctx->stream;
+ 	unsigned int us_per_line;
+ 
+-	if (stream->ctx->asic_id.chip_family == FAMILY_RV &&
+-			ASICREV_IS_RAVEN(stream->ctx->asic_id.hw_internal_rev)) {
++	if (!dc->hwss.get_vupdate_offset_from_vsync)
++		return;
+ 
+-		vupdate_line = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
+-		if (!dc_stream_get_crtc_position(dc, &stream, 1, &vpos, &nvpos))
+-			return;
++	vupdate_line = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
++	if (!dc_stream_get_crtc_position(dc, &stream, 1, &vpos, &nvpos))
++		return;
+ 
+-		if (vpos >= vupdate_line)
+-			return;
++	if (vpos >= vupdate_line)
++		return;
+ 
+-		us_per_line = stream->timing.h_total * 10000 / stream->timing.pix_clk_100hz;
+-		lines_to_vupdate = vupdate_line - vpos;
+-		us_to_vupdate = lines_to_vupdate * us_per_line;
++	us_per_line =
++		stream->timing.h_total * 10000 / stream->timing.pix_clk_100hz;
++	lines_to_vupdate = vupdate_line - vpos;
++	us_to_vupdate = lines_to_vupdate * us_per_line;
+ 
+-		/* 70 us is a conservative estimate of cursor update time*/
+-		if (us_to_vupdate < 70)
+-			udelay(us_to_vupdate);
+-	}
++	/* 70 us is a conservative estimate of cursor update time*/
++	if (us_to_vupdate < 70)
++		udelay(us_to_vupdate);
+ #endif
+ }
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index a444fed94184..ad422e00f9fe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -2306,7 +2306,8 @@ void dcn20_fpga_init_hw(struct dc *dc)
+ 
+ 	REG_UPDATE(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_REFDIV, 2);
+ 	REG_UPDATE(DCHUBBUB_GLOBAL_TIMER_CNTL, DCHUBBUB_GLOBAL_TIMER_ENABLE, 1);
+-	REG_WRITE(REFCLK_CNTL, 0);
++	if (REG(REFCLK_CNTL))
++		REG_WRITE(REFCLK_CNTL, 0);
+ 	//
+ 
+ 
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index 33d0a176841a..122d3e734c59 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -250,7 +250,7 @@ struct _vcs_dpi_soc_bounding_box_st dcn2_1_soc = {
+ 	.dram_channel_width_bytes = 4,
+ 	.fabric_datapath_to_dcn_data_return_bytes = 32,
+ 	.dcn_downspread_percent = 0.5,
+-	.downspread_percent = 0.5,
++	.downspread_percent = 0.38,
+ 	.dram_page_open_time_ns = 50.0,
+ 	.dram_rw_turnaround_time_ns = 17.5,
+ 	.dram_return_buffer_per_channel_bytes = 8192,
+diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+index c195575366a3..e4e5a53b2b4e 100644
+--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+@@ -1435,7 +1435,8 @@ static int pp_get_asic_baco_capability(void *handle, bool *cap)
+ 	if (!hwmgr)
+ 		return -EINVAL;
+ 
+-	if (!hwmgr->pm_en || !hwmgr->hwmgr_func->get_asic_baco_capability)
++	if (!(hwmgr->not_vf && amdgpu_dpm) ||
++		!hwmgr->hwmgr_func->get_asic_baco_capability)
+ 		return 0;
+ 
+ 	mutex_lock(&hwmgr->smu_lock);
+@@ -1469,7 +1470,8 @@ static int pp_set_asic_baco_state(void *handle, int state)
+ 	if (!hwmgr)
+ 		return -EINVAL;
+ 
+-	if (!hwmgr->pm_en || !hwmgr->hwmgr_func->set_asic_baco_state)
++	if (!(hwmgr->not_vf && amdgpu_dpm) ||
++		!hwmgr->hwmgr_func->set_asic_baco_state)
+ 		return 0;
+ 
+ 	mutex_lock(&hwmgr->smu_lock);
+diff --git a/drivers/gpu/drm/i915/display/intel_ddi.c b/drivers/gpu/drm/i915/display/intel_ddi.c
+index 2fe594952748..d3c58026d55e 100644
+--- a/drivers/gpu/drm/i915/display/intel_ddi.c
++++ b/drivers/gpu/drm/i915/display/intel_ddi.c
+@@ -3545,9 +3545,6 @@ static void hsw_ddi_pre_enable_dp(struct intel_encoder *encoder,
+ 	intel_dp_set_link_params(intel_dp, crtc_state->port_clock,
+ 				 crtc_state->lane_count, is_mst);
+ 
+-	intel_dp->regs.dp_tp_ctl = DP_TP_CTL(port);
+-	intel_dp->regs.dp_tp_status = DP_TP_STATUS(port);
+-
+ 	intel_edp_panel_on(intel_dp);
+ 
+ 	intel_ddi_clk_select(encoder, crtc_state);
+@@ -4269,12 +4266,18 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
+ 	struct drm_i915_private *dev_priv = to_i915(encoder->base.dev);
+ 	struct intel_crtc *intel_crtc = to_intel_crtc(pipe_config->uapi.crtc);
+ 	enum transcoder cpu_transcoder = pipe_config->cpu_transcoder;
++	struct intel_dp *intel_dp = enc_to_intel_dp(encoder);
+ 	u32 temp, flags = 0;
+ 
+ 	/* XXX: DSI transcoder paranoia */
+ 	if (WARN_ON(transcoder_is_dsi(cpu_transcoder)))
+ 		return;
+ 
++	if (INTEL_GEN(dev_priv) >= 12) {
++		intel_dp->regs.dp_tp_ctl = TGL_DP_TP_CTL(cpu_transcoder);
++		intel_dp->regs.dp_tp_status = TGL_DP_TP_STATUS(cpu_transcoder);
++	}
++
+ 	intel_dsc_get_config(encoder, pipe_config);
+ 
+ 	temp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));
+@@ -4492,6 +4495,7 @@ static const struct drm_encoder_funcs intel_ddi_funcs = {
+ static struct intel_connector *
+ intel_ddi_init_dp_connector(struct intel_digital_port *intel_dig_port)
+ {
++	struct drm_i915_private *dev_priv = to_i915(intel_dig_port->base.base.dev);
+ 	struct intel_connector *connector;
+ 	enum port port = intel_dig_port->base.port;
+ 
+@@ -4502,6 +4506,10 @@ intel_ddi_init_dp_connector(struct intel_digital_port *intel_dig_port)
+ 	intel_dig_port->dp.output_reg = DDI_BUF_CTL(port);
+ 	intel_dig_port->dp.prepare_link_retrain =
+ 		intel_ddi_prepare_link_retrain;
++	if (INTEL_GEN(dev_priv) < 12) {
++		intel_dig_port->dp.regs.dp_tp_ctl = DP_TP_CTL(port);
++		intel_dig_port->dp.regs.dp_tp_status = DP_TP_STATUS(port);
++	}
+ 
+ 	if (!intel_dp_init_connector(intel_dig_port, connector)) {
+ 		kfree(connector);
+diff --git a/drivers/gpu/drm/i915/display/intel_display_power.c b/drivers/gpu/drm/i915/display/intel_display_power.c
+index 46c40db992dd..5895b8c7662e 100644
+--- a/drivers/gpu/drm/i915/display/intel_display_power.c
++++ b/drivers/gpu/drm/i915/display/intel_display_power.c
+@@ -4068,7 +4068,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
+ 	{
+ 		.name = "AUX D TBT1",
+ 		.domains = TGL_AUX_D_TBT1_IO_POWER_DOMAINS,
+-		.ops = &hsw_power_well_ops,
++		.ops = &icl_tc_phy_aux_power_well_ops,
+ 		.id = DISP_PW_ID_NONE,
+ 		{
+ 			.hsw.regs = &icl_aux_power_well_regs,
+@@ -4079,7 +4079,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
+ 	{
+ 		.name = "AUX E TBT2",
+ 		.domains = TGL_AUX_E_TBT2_IO_POWER_DOMAINS,
+-		.ops = &hsw_power_well_ops,
++		.ops = &icl_tc_phy_aux_power_well_ops,
+ 		.id = DISP_PW_ID_NONE,
+ 		{
+ 			.hsw.regs = &icl_aux_power_well_regs,
+@@ -4090,7 +4090,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
+ 	{
+ 		.name = "AUX F TBT3",
+ 		.domains = TGL_AUX_F_TBT3_IO_POWER_DOMAINS,
+-		.ops = &hsw_power_well_ops,
++		.ops = &icl_tc_phy_aux_power_well_ops,
+ 		.id = DISP_PW_ID_NONE,
+ 		{
+ 			.hsw.regs = &icl_aux_power_well_regs,
+@@ -4101,7 +4101,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
+ 	{
+ 		.name = "AUX G TBT4",
+ 		.domains = TGL_AUX_G_TBT4_IO_POWER_DOMAINS,
+-		.ops = &hsw_power_well_ops,
++		.ops = &icl_tc_phy_aux_power_well_ops,
+ 		.id = DISP_PW_ID_NONE,
+ 		{
+ 			.hsw.regs = &icl_aux_power_well_regs,
+@@ -4112,7 +4112,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
+ 	{
+ 		.name = "AUX H TBT5",
+ 		.domains = TGL_AUX_H_TBT5_IO_POWER_DOMAINS,
+-		.ops = &hsw_power_well_ops,
++		.ops = &icl_tc_phy_aux_power_well_ops,
+ 		.id = DISP_PW_ID_NONE,
+ 		{
+ 			.hsw.regs = &icl_aux_power_well_regs,
+@@ -4123,7 +4123,7 @@ static const struct i915_power_well_desc tgl_power_wells[] = {
+ 	{
+ 		.name = "AUX I TBT6",
+ 		.domains = TGL_AUX_I_TBT6_IO_POWER_DOMAINS,
+-		.ops = &hsw_power_well_ops,
++		.ops = &icl_tc_phy_aux_power_well_ops,
+ 		.id = DISP_PW_ID_NONE,
+ 		{
+ 			.hsw.regs = &icl_aux_power_well_regs,
+diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c
+index c7424e2a04a3..fa3a9e9e0b29 100644
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -2492,9 +2492,6 @@ static void intel_dp_prepare(struct intel_encoder *encoder,
+ 				 intel_crtc_has_type(pipe_config,
+ 						     INTEL_OUTPUT_DP_MST));
+ 
+-	intel_dp->regs.dp_tp_ctl = DP_TP_CTL(port);
+-	intel_dp->regs.dp_tp_status = DP_TP_STATUS(port);
+-
+ 	/*
+ 	 * There are four kinds of DP registers:
+ 	 *
+@@ -7616,6 +7613,8 @@ bool intel_dp_init(struct drm_i915_private *dev_priv,
+ 
+ 	intel_dig_port->dp.output_reg = output_reg;
+ 	intel_dig_port->max_lanes = 4;
++	intel_dig_port->dp.regs.dp_tp_ctl = DP_TP_CTL(port);
++	intel_dig_port->dp.regs.dp_tp_status = DP_TP_STATUS(port);
+ 
+ 	intel_encoder->type = INTEL_OUTPUT_DP;
+ 	intel_encoder->power_domain = intel_port_to_power_domain(port);
+diff --git a/drivers/gpu/drm/i915/display/intel_fbc.c b/drivers/gpu/drm/i915/display/intel_fbc.c
+index a1048ece541e..b6d5e7defa5b 100644
+--- a/drivers/gpu/drm/i915/display/intel_fbc.c
++++ b/drivers/gpu/drm/i915/display/intel_fbc.c
+@@ -478,8 +478,7 @@ static int intel_fbc_alloc_cfb(struct drm_i915_private *dev_priv,
+ 	if (!ret)
+ 		goto err_llb;
+ 	else if (ret > 1) {
+-		DRM_INFO("Reducing the compressed framebuffer size. This may lead to less power savings than a non-reduced-size. Try to increase stolen memory size if available in BIOS.\n");
+-
++		DRM_INFO_ONCE("Reducing the compressed framebuffer size. This may lead to less power savings than a non-reduced-size. Try to increase stolen memory size if available in BIOS.\n");
+ 	}
+ 
+ 	fbc->threshold = ret;
+diff --git a/drivers/gpu/drm/i915/display/intel_sprite.c b/drivers/gpu/drm/i915/display/intel_sprite.c
+index fca77ec1e0dd..f55404a94eba 100644
+--- a/drivers/gpu/drm/i915/display/intel_sprite.c
++++ b/drivers/gpu/drm/i915/display/intel_sprite.c
+@@ -2754,19 +2754,25 @@ static bool skl_plane_format_mod_supported(struct drm_plane *_plane,
+ 	}
+ }
+ 
+-static bool gen12_plane_supports_mc_ccs(enum plane_id plane_id)
++static bool gen12_plane_supports_mc_ccs(struct drm_i915_private *dev_priv,
++					enum plane_id plane_id)
+ {
++	/* Wa_14010477008:tgl[a0..c0] */
++	if (IS_TGL_REVID(dev_priv, TGL_REVID_A0, TGL_REVID_C0))
++		return false;
++
+ 	return plane_id < PLANE_SPRITE4;
+ }
+ 
+ static bool gen12_plane_format_mod_supported(struct drm_plane *_plane,
+ 					     u32 format, u64 modifier)
+ {
++	struct drm_i915_private *dev_priv = to_i915(_plane->dev);
+ 	struct intel_plane *plane = to_intel_plane(_plane);
+ 
+ 	switch (modifier) {
+ 	case I915_FORMAT_MOD_Y_TILED_GEN12_MC_CCS:
+-		if (!gen12_plane_supports_mc_ccs(plane->id))
++		if (!gen12_plane_supports_mc_ccs(dev_priv, plane->id))
+ 			return false;
+ 		/* fall through */
+ 	case DRM_FORMAT_MOD_LINEAR:
+@@ -2935,9 +2941,10 @@ static const u32 *icl_get_plane_formats(struct drm_i915_private *dev_priv,
+ 	}
+ }
+ 
+-static const u64 *gen12_get_plane_modifiers(enum plane_id plane_id)
++static const u64 *gen12_get_plane_modifiers(struct drm_i915_private *dev_priv,
++					    enum plane_id plane_id)
+ {
+-	if (gen12_plane_supports_mc_ccs(plane_id))
++	if (gen12_plane_supports_mc_ccs(dev_priv, plane_id))
+ 		return gen12_plane_format_modifiers_mc_ccs;
+ 	else
+ 		return gen12_plane_format_modifiers_rc_ccs;
+@@ -3008,7 +3015,7 @@ skl_universal_plane_create(struct drm_i915_private *dev_priv,
+ 
+ 	plane->has_ccs = skl_plane_has_ccs(dev_priv, pipe, plane_id);
+ 	if (INTEL_GEN(dev_priv) >= 12) {
+-		modifiers = gen12_get_plane_modifiers(plane_id);
++		modifiers = gen12_get_plane_modifiers(dev_priv, plane_id);
+ 		plane_funcs = &gen12_plane_funcs;
+ 	} else {
+ 		if (plane->has_ccs)
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_domain.c b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+index 0cc40e77bbd2..4f96c8788a2e 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_domain.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_domain.c
+@@ -368,7 +368,6 @@ static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
+ 	struct drm_i915_private *i915 = to_i915(obj->base.dev);
+ 	struct i915_vma *vma;
+ 
+-	GEM_BUG_ON(!i915_gem_object_has_pinned_pages(obj));
+ 	if (!atomic_read(&obj->bind_count))
+ 		return;
+ 
+@@ -400,12 +399,8 @@ static void i915_gem_object_bump_inactive_ggtt(struct drm_i915_gem_object *obj)
+ void
+ i915_gem_object_unpin_from_display_plane(struct i915_vma *vma)
+ {
+-	struct drm_i915_gem_object *obj = vma->obj;
+-
+-	assert_object_held(obj);
+-
+ 	/* Bump the LRU to try and avoid premature eviction whilst flipping  */
+-	i915_gem_object_bump_inactive_ggtt(obj);
++	i915_gem_object_bump_inactive_ggtt(vma->obj);
+ 
+ 	i915_vma_unpin(vma);
+ }
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine.h b/drivers/gpu/drm/i915/gt/intel_engine.h
+index 5df003061e44..beb3211a6249 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine.h
++++ b/drivers/gpu/drm/i915/gt/intel_engine.h
+@@ -338,13 +338,4 @@ intel_engine_has_preempt_reset(const struct intel_engine_cs *engine)
+ 	return intel_engine_has_preemption(engine);
+ }
+ 
+-static inline bool
+-intel_engine_has_timeslices(const struct intel_engine_cs *engine)
+-{
+-	if (!IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION))
+-		return false;
+-
+-	return intel_engine_has_semaphores(engine);
+-}
+-
+ #endif /* _INTEL_RINGBUFFER_H_ */
+diff --git a/drivers/gpu/drm/i915/gt/intel_engine_types.h b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+index 92be41a6903c..4ea067e1508a 100644
+--- a/drivers/gpu/drm/i915/gt/intel_engine_types.h
++++ b/drivers/gpu/drm/i915/gt/intel_engine_types.h
+@@ -473,10 +473,11 @@ struct intel_engine_cs {
+ #define I915_ENGINE_SUPPORTS_STATS   BIT(1)
+ #define I915_ENGINE_HAS_PREEMPTION   BIT(2)
+ #define I915_ENGINE_HAS_SEMAPHORES   BIT(3)
+-#define I915_ENGINE_NEEDS_BREADCRUMB_TASKLET BIT(4)
+-#define I915_ENGINE_IS_VIRTUAL       BIT(5)
+-#define I915_ENGINE_HAS_RELATIVE_MMIO BIT(6)
+-#define I915_ENGINE_REQUIRES_CMD_PARSER BIT(7)
++#define I915_ENGINE_HAS_TIMESLICES   BIT(4)
++#define I915_ENGINE_NEEDS_BREADCRUMB_TASKLET BIT(5)
++#define I915_ENGINE_IS_VIRTUAL       BIT(6)
++#define I915_ENGINE_HAS_RELATIVE_MMIO BIT(7)
++#define I915_ENGINE_REQUIRES_CMD_PARSER BIT(8)
+ 	unsigned int flags;
+ 
+ 	/*
+@@ -573,6 +574,15 @@ intel_engine_has_semaphores(const struct intel_engine_cs *engine)
+ 	return engine->flags & I915_ENGINE_HAS_SEMAPHORES;
+ }
+ 
++static inline bool
++intel_engine_has_timeslices(const struct intel_engine_cs *engine)
++{
++	if (!IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION))
++		return false;
++
++	return engine->flags & I915_ENGINE_HAS_TIMESLICES;
++}
++
+ static inline bool
+ intel_engine_needs_breadcrumb_tasklet(const struct intel_engine_cs *engine)
+ {
+diff --git a/drivers/gpu/drm/i915/gt/intel_lrc.c b/drivers/gpu/drm/i915/gt/intel_lrc.c
+index 31455eceeb0c..637c03ee1a57 100644
+--- a/drivers/gpu/drm/i915/gt/intel_lrc.c
++++ b/drivers/gpu/drm/i915/gt/intel_lrc.c
+@@ -1626,6 +1626,9 @@ static void defer_request(struct i915_request *rq, struct list_head * const pl)
+ 			struct i915_request *w =
+ 				container_of(p->waiter, typeof(*w), sched);
+ 
++			if (p->flags & I915_DEPENDENCY_WEAK)
++				continue;
++
+ 			/* Leave semaphores spinning on the other engines */
+ 			if (w->engine != rq->engine)
+ 				continue;
+@@ -4194,8 +4197,11 @@ void intel_execlists_set_default_submission(struct intel_engine_cs *engine)
+ 	engine->flags |= I915_ENGINE_SUPPORTS_STATS;
+ 	if (!intel_vgpu_active(engine->i915)) {
+ 		engine->flags |= I915_ENGINE_HAS_SEMAPHORES;
+-		if (HAS_LOGICAL_RING_PREEMPTION(engine->i915))
++		if (HAS_LOGICAL_RING_PREEMPTION(engine->i915)) {
+ 			engine->flags |= I915_ENGINE_HAS_PREEMPTION;
++			if (IS_ACTIVE(CONFIG_DRM_I915_TIMESLICE_DURATION))
++				engine->flags |= I915_ENGINE_HAS_TIMESLICES;
++		}
+ 	}
+ 
+ 	if (INTEL_GEN(engine->i915) >= 12)
+diff --git a/drivers/gpu/drm/i915/gvt/scheduler.c b/drivers/gpu/drm/i915/gvt/scheduler.c
+index 685d1e04a5ff..709ad181bc94 100644
+--- a/drivers/gpu/drm/i915/gvt/scheduler.c
++++ b/drivers/gpu/drm/i915/gvt/scheduler.c
+@@ -375,7 +375,11 @@ static void set_context_ppgtt_from_shadow(struct intel_vgpu_workload *workload,
+ 		for (i = 0; i < GVT_RING_CTX_NR_PDPS; i++) {
+ 			struct i915_page_directory * const pd =
+ 				i915_pd_entry(ppgtt->pd, i);
+-
++			/* skip now as current i915 ppgtt alloc won't allocate
++			   top level pdp for non 4-level table, won't impact
++			   shadow ppgtt. */
++			if (!pd)
++				break;
+ 			px_dma(pd) = mm->ppgtt_mm.shadow_pdps[i];
+ 		}
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
+index 810e3ccd56ec..dff134265112 100644
+--- a/drivers/gpu/drm/i915/i915_drv.h
++++ b/drivers/gpu/drm/i915/i915_drv.h
+@@ -1601,6 +1601,8 @@ IS_SUBPLATFORM(const struct drm_i915_private *i915,
+ 	(IS_ICELAKE(p) && IS_REVID(p, since, until))
+ 
+ #define TGL_REVID_A0		0x0
++#define TGL_REVID_B0		0x1
++#define TGL_REVID_C0		0x2
+ 
+ #define IS_TGL_REVID(p, since, until) \
+ 	(IS_TIGERLAKE(p) && IS_REVID(p, since, until))
+diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
+index 0697bedebeef..d99df9c33708 100644
+--- a/drivers/gpu/drm/i915/i915_gem_evict.c
++++ b/drivers/gpu/drm/i915/i915_gem_evict.c
+@@ -130,6 +130,13 @@ search_again:
+ 	active = NULL;
+ 	INIT_LIST_HEAD(&eviction_list);
+ 	list_for_each_entry_safe(vma, next, &vm->bound_list, vm_link) {
++		if (vma == active) { /* now seen this vma twice */
++			if (flags & PIN_NONBLOCK)
++				break;
++
++			active = ERR_PTR(-EAGAIN);
++		}
++
+ 		/*
+ 		 * We keep this list in a rough least-recently scanned order
+ 		 * of active elements (inactive elements are cheap to reap).
+@@ -145,21 +152,12 @@ search_again:
+ 		 * To notice when we complete one full cycle, we record the
+ 		 * first active element seen, before moving it to the tail.
+ 		 */
+-		if (i915_vma_is_active(vma)) {
+-			if (vma == active) {
+-				if (flags & PIN_NONBLOCK)
+-					break;
+-
+-				active = ERR_PTR(-EAGAIN);
+-			}
+-
+-			if (active != ERR_PTR(-EAGAIN)) {
+-				if (!active)
+-					active = vma;
++		if (active != ERR_PTR(-EAGAIN) && i915_vma_is_active(vma)) {
++			if (!active)
++				active = vma;
+ 
+-				list_move_tail(&vma->vm_link, &vm->bound_list);
+-				continue;
+-			}
++			list_move_tail(&vma->vm_link, &vm->bound_list);
++			continue;
+ 		}
+ 
+ 		if (mark_free(&scan, vma, flags, &eviction_list))
+diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
+index c6f02b0b6c7a..52825ae8301b 100644
+--- a/drivers/gpu/drm/i915/i915_irq.c
++++ b/drivers/gpu/drm/i915/i915_irq.c
+@@ -3324,7 +3324,7 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
+ 	u32 de_pipe_masked = gen8_de_pipe_fault_mask(dev_priv) |
+ 		GEN8_PIPE_CDCLK_CRC_DONE;
+ 	u32 de_pipe_enables;
+-	u32 de_port_masked = GEN8_AUX_CHANNEL_A;
++	u32 de_port_masked = gen8_de_port_aux_mask(dev_priv);
+ 	u32 de_port_enables;
+ 	u32 de_misc_masked = GEN8_DE_EDP_PSR;
+ 	enum pipe pipe;
+@@ -3332,18 +3332,8 @@ static void gen8_de_irq_postinstall(struct drm_i915_private *dev_priv)
+ 	if (INTEL_GEN(dev_priv) <= 10)
+ 		de_misc_masked |= GEN8_DE_MISC_GSE;
+ 
+-	if (INTEL_GEN(dev_priv) >= 9) {
+-		de_port_masked |= GEN9_AUX_CHANNEL_B | GEN9_AUX_CHANNEL_C |
+-				  GEN9_AUX_CHANNEL_D;
+-		if (IS_GEN9_LP(dev_priv))
+-			de_port_masked |= BXT_DE_PORT_GMBUS;
+-	}
+-
+-	if (INTEL_GEN(dev_priv) >= 11)
+-		de_port_masked |= ICL_AUX_CHANNEL_E;
+-
+-	if (IS_CNL_WITH_PORT_F(dev_priv) || INTEL_GEN(dev_priv) >= 11)
+-		de_port_masked |= CNL_AUX_CHANNEL_F;
++	if (IS_GEN9_LP(dev_priv))
++		de_port_masked |= BXT_DE_PORT_GMBUS;
+ 
+ 	de_pipe_enables = de_pipe_masked | GEN8_PIPE_VBLANK |
+ 					   GEN8_PIPE_FIFO_UNDERRUN;
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index a18b2a244706..32ab154db788 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -951,7 +951,9 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
+ 		return 0;
+ 
+ 	if (to->engine->schedule) {
+-		ret = i915_sched_node_add_dependency(&to->sched, &from->sched);
++		ret = i915_sched_node_add_dependency(&to->sched,
++						     &from->sched,
++						     I915_DEPENDENCY_EXTERNAL);
+ 		if (ret < 0)
+ 			return ret;
+ 	}
+@@ -1084,7 +1086,9 @@ __i915_request_await_execution(struct i915_request *to,
+ 
+ 	/* Couple the dependency tree for PI on this exposed to->fence */
+ 	if (to->engine->schedule) {
+-		err = i915_sched_node_add_dependency(&to->sched, &from->sched);
++		err = i915_sched_node_add_dependency(&to->sched,
++						     &from->sched,
++						     I915_DEPENDENCY_WEAK);
+ 		if (err < 0)
+ 			return err;
+ 	}
+diff --git a/drivers/gpu/drm/i915/i915_scheduler.c b/drivers/gpu/drm/i915/i915_scheduler.c
+index 34b654b4e58a..8e419d897c2b 100644
+--- a/drivers/gpu/drm/i915/i915_scheduler.c
++++ b/drivers/gpu/drm/i915/i915_scheduler.c
+@@ -455,7 +455,8 @@ bool __i915_sched_node_add_dependency(struct i915_sched_node *node,
+ }
+ 
+ int i915_sched_node_add_dependency(struct i915_sched_node *node,
+-				   struct i915_sched_node *signal)
++				   struct i915_sched_node *signal,
++				   unsigned long flags)
+ {
+ 	struct i915_dependency *dep;
+ 
+@@ -464,8 +465,7 @@ int i915_sched_node_add_dependency(struct i915_sched_node *node,
+ 		return -ENOMEM;
+ 
+ 	if (!__i915_sched_node_add_dependency(node, signal, dep,
+-					      I915_DEPENDENCY_EXTERNAL |
+-					      I915_DEPENDENCY_ALLOC))
++					      flags | I915_DEPENDENCY_ALLOC))
+ 		i915_dependency_free(dep);
+ 
+ 	return 0;
+diff --git a/drivers/gpu/drm/i915/i915_scheduler.h b/drivers/gpu/drm/i915/i915_scheduler.h
+index d1dc4efef77b..6f0bf00fc569 100644
+--- a/drivers/gpu/drm/i915/i915_scheduler.h
++++ b/drivers/gpu/drm/i915/i915_scheduler.h
+@@ -34,7 +34,8 @@ bool __i915_sched_node_add_dependency(struct i915_sched_node *node,
+ 				      unsigned long flags);
+ 
+ int i915_sched_node_add_dependency(struct i915_sched_node *node,
+-				   struct i915_sched_node *signal);
++				   struct i915_sched_node *signal,
++				   unsigned long flags);
+ 
+ void i915_sched_node_fini(struct i915_sched_node *node);
+ 
+diff --git a/drivers/gpu/drm/i915/i915_scheduler_types.h b/drivers/gpu/drm/i915/i915_scheduler_types.h
+index d18e70550054..7186875088a0 100644
+--- a/drivers/gpu/drm/i915/i915_scheduler_types.h
++++ b/drivers/gpu/drm/i915/i915_scheduler_types.h
+@@ -78,6 +78,7 @@ struct i915_dependency {
+ 	unsigned long flags;
+ #define I915_DEPENDENCY_ALLOC		BIT(0)
+ #define I915_DEPENDENCY_EXTERNAL	BIT(1)
++#define I915_DEPENDENCY_WEAK		BIT(2)
+ };
+ 
+ #endif /* _I915_SCHEDULER_TYPES_H_ */
+diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
+index bd2d30ecc030..53c7b1a1b355 100644
+--- a/drivers/gpu/drm/i915/intel_pm.c
++++ b/drivers/gpu/drm/i915/intel_pm.c
+@@ -4722,7 +4722,7 @@ static void skl_compute_plane_wm(const struct intel_crtc_state *crtc_state,
+ 	 * WaIncreaseLatencyIPCEnabled: kbl,cfl
+ 	 * Display WA #1141: kbl,cfl
+ 	 */
+-	if ((IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv)) ||
++	if ((IS_KABYLAKE(dev_priv) || IS_COFFEELAKE(dev_priv)) &&
+ 	    dev_priv->ipc_enabled)
+ 		latency += 4;
+ 
+diff --git a/drivers/gpu/drm/qxl/qxl_image.c b/drivers/gpu/drm/qxl/qxl_image.c
+index 43688ecdd8a0..60ab7151b84d 100644
+--- a/drivers/gpu/drm/qxl/qxl_image.c
++++ b/drivers/gpu/drm/qxl/qxl_image.c
+@@ -212,7 +212,8 @@ qxl_image_init_helper(struct qxl_device *qdev,
+ 		break;
+ 	default:
+ 		DRM_ERROR("unsupported image bit depth\n");
+-		return -EINVAL; /* TODO: cleanup */
++		qxl_bo_kunmap_atomic_page(qdev, image_bo, ptr);
++		return -EINVAL;
+ 	}
+ 	image->u.bitmap.flags = QXL_BITMAP_TOP_DOWN;
+ 	image->u.bitmap.x = width;
+diff --git a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+index a75fcb113172..2b6d77ca3dfc 100644
+--- a/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
++++ b/drivers/gpu/drm/sun4i/sun6i_mipi_dsi.c
+@@ -719,7 +719,7 @@ static void sun6i_dsi_encoder_enable(struct drm_encoder *encoder)
+ 	struct drm_display_mode *mode = &encoder->crtc->state->adjusted_mode;
+ 	struct sun6i_dsi *dsi = encoder_to_sun6i_dsi(encoder);
+ 	struct mipi_dsi_device *device = dsi->device;
+-	union phy_configure_opts opts = { 0 };
++	union phy_configure_opts opts = { };
+ 	struct phy_configure_opts_mipi_dphy *cfg = &opts.mipi_dphy;
+ 	u16 delay;
+ 
+diff --git a/drivers/gpu/drm/tegra/drm.c b/drivers/gpu/drm/tegra/drm.c
+index bd268028fb3d..583cd6e0ae27 100644
+--- a/drivers/gpu/drm/tegra/drm.c
++++ b/drivers/gpu/drm/tegra/drm.c
+@@ -1039,6 +1039,7 @@ void tegra_drm_free(struct tegra_drm *tegra, size_t size, void *virt,
+ 
+ static bool host1x_drm_wants_iommu(struct host1x_device *dev)
+ {
++	struct host1x *host1x = dev_get_drvdata(dev->dev.parent);
+ 	struct iommu_domain *domain;
+ 
+ 	/*
+@@ -1076,7 +1077,7 @@ static bool host1x_drm_wants_iommu(struct host1x_device *dev)
+ 	 * sufficient and whether or not the host1x is attached to an IOMMU
+ 	 * doesn't matter.
+ 	 */
+-	if (!domain && dma_get_mask(dev->dev.parent) <= DMA_BIT_MASK(32))
++	if (!domain && host1x_get_dma_mask(host1x) <= DMA_BIT_MASK(32))
+ 		return true;
+ 
+ 	return domain != NULL;
+diff --git a/drivers/gpu/host1x/dev.c b/drivers/gpu/host1x/dev.c
+index 388bcc2889aa..40a4b9f8b861 100644
+--- a/drivers/gpu/host1x/dev.c
++++ b/drivers/gpu/host1x/dev.c
+@@ -502,6 +502,19 @@ static void __exit tegra_host1x_exit(void)
+ }
+ module_exit(tegra_host1x_exit);
+ 
++/**
++ * host1x_get_dma_mask() - query the supported DMA mask for host1x
++ * @host1x: host1x instance
++ *
++ * Note that this returns the supported DMA mask for host1x, which can be
++ * different from the applicable DMA mask under certain circumstances.
++ */
++u64 host1x_get_dma_mask(struct host1x *host1x)
++{
++	return host1x->info->dma_mask;
++}
++EXPORT_SYMBOL(host1x_get_dma_mask);
++
+ MODULE_AUTHOR("Thierry Reding <thierry.reding@avionic-design.de>");
+ MODULE_AUTHOR("Terje Bergstrom <tbergstrom@nvidia.com>");
+ MODULE_DESCRIPTION("Host1x driver for Tegra products");
+diff --git a/drivers/hwmon/da9052-hwmon.c b/drivers/hwmon/da9052-hwmon.c
+index 53b517dbe7e6..4af2fc309c28 100644
+--- a/drivers/hwmon/da9052-hwmon.c
++++ b/drivers/hwmon/da9052-hwmon.c
+@@ -244,9 +244,9 @@ static ssize_t da9052_tsi_show(struct device *dev,
+ 	int channel = to_sensor_dev_attr(devattr)->index;
+ 	int ret;
+ 
+-	mutex_lock(&hwmon->hwmon_lock);
++	mutex_lock(&hwmon->da9052->auxadc_lock);
+ 	ret = __da9052_read_tsi(dev, channel);
+-	mutex_unlock(&hwmon->hwmon_lock);
++	mutex_unlock(&hwmon->da9052->auxadc_lock);
+ 
+ 	if (ret < 0)
+ 		return ret;
+diff --git a/drivers/hwmon/drivetemp.c b/drivers/hwmon/drivetemp.c
+index 9179460c2d9d..0d4f3d97ffc6 100644
+--- a/drivers/hwmon/drivetemp.c
++++ b/drivers/hwmon/drivetemp.c
+@@ -346,7 +346,7 @@ static int drivetemp_identify_sata(struct drivetemp_data *st)
+ 	st->have_temp_highest = temp_is_valid(buf[SCT_STATUS_TEMP_HIGHEST]);
+ 
+ 	if (!have_sct_data_table)
+-		goto skip_sct;
++		goto skip_sct_data;
+ 
+ 	/* Request and read temperature history table */
+ 	memset(buf, '\0', sizeof(st->smartdata));
+diff --git a/drivers/infiniband/core/cache.c b/drivers/infiniband/core/cache.c
+index 17bfedd24cc3..4619629b958c 100644
+--- a/drivers/infiniband/core/cache.c
++++ b/drivers/infiniband/core/cache.c
+@@ -1536,8 +1536,11 @@ int ib_cache_setup_one(struct ib_device *device)
+ 	if (err)
+ 		return err;
+ 
+-	rdma_for_each_port (device, p)
+-		ib_cache_update(device, p, true);
++	rdma_for_each_port (device, p) {
++		err = ib_cache_update(device, p, true);
++		if (err)
++			return err;
++	}
+ 
+ 	return 0;
+ }
+diff --git a/drivers/infiniband/core/nldev.c b/drivers/infiniband/core/nldev.c
+index 9eec26d10d7b..e16105be2eb2 100644
+--- a/drivers/infiniband/core/nldev.c
++++ b/drivers/infiniband/core/nldev.c
+@@ -1292,11 +1292,10 @@ static int res_get_common_doit(struct sk_buff *skb, struct nlmsghdr *nlh,
+ 	has_cap_net_admin = netlink_capable(skb, CAP_NET_ADMIN);
+ 
+ 	ret = fill_func(msg, has_cap_net_admin, res, port);
+-
+-	rdma_restrack_put(res);
+ 	if (ret)
+ 		goto err_free;
+ 
++	rdma_restrack_put(res);
+ 	nlmsg_end(msg, nlh);
+ 	ib_device_put(device);
+ 	return rdma_nl_unicast(sock_net(skb->sk), msg, NETLINK_CB(skb).portid);
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index 177333d8bcda..bf8e149d3191 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -459,7 +459,8 @@ alloc_begin_fd_uobject(const struct uverbs_api_object *obj,
+ 	struct ib_uobject *uobj;
+ 	struct file *filp;
+ 
+-	if (WARN_ON(fd_type->fops->release != &uverbs_uobject_fd_release))
++	if (WARN_ON(fd_type->fops->release != &uverbs_uobject_fd_release &&
++		    fd_type->fops->release != &uverbs_async_event_release))
+ 		return ERR_PTR(-EINVAL);
+ 
+ 	new_fd = get_unused_fd_flags(O_CLOEXEC);
+diff --git a/drivers/infiniband/core/uverbs.h b/drivers/infiniband/core/uverbs.h
+index 7df71983212d..3d189c7ee59e 100644
+--- a/drivers/infiniband/core/uverbs.h
++++ b/drivers/infiniband/core/uverbs.h
+@@ -219,6 +219,7 @@ void ib_uverbs_init_event_queue(struct ib_uverbs_event_queue *ev_queue);
+ void ib_uverbs_init_async_event_file(struct ib_uverbs_async_event_file *ev_file);
+ void ib_uverbs_free_event_queue(struct ib_uverbs_event_queue *event_queue);
+ void ib_uverbs_flow_resources_free(struct ib_uflow_resources *uflow_res);
++int uverbs_async_event_release(struct inode *inode, struct file *filp);
+ 
+ int ib_alloc_ucontext(struct uverbs_attr_bundle *attrs);
+ int ib_init_ucontext(struct uverbs_attr_bundle *attrs);
+@@ -227,6 +228,9 @@ void ib_uverbs_release_ucq(struct ib_uverbs_completion_event_file *ev_file,
+ 			   struct ib_ucq_object *uobj);
+ void ib_uverbs_release_uevent(struct ib_uevent_object *uobj);
+ void ib_uverbs_release_file(struct kref *ref);
++void ib_uverbs_async_handler(struct ib_uverbs_async_event_file *async_file,
++			     __u64 element, __u64 event,
++			     struct list_head *obj_list, u32 *counter);
+ 
+ void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context);
+ void ib_uverbs_cq_event_handler(struct ib_event *event, void *context_ptr);
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 17fc25db0311..1bab8de14757 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -346,7 +346,7 @@ const struct file_operations uverbs_async_event_fops = {
+ 	.owner	 = THIS_MODULE,
+ 	.read	 = ib_uverbs_async_event_read,
+ 	.poll    = ib_uverbs_async_event_poll,
+-	.release = uverbs_uobject_fd_release,
++	.release = uverbs_async_event_release,
+ 	.fasync  = ib_uverbs_async_event_fasync,
+ 	.llseek	 = no_llseek,
+ };
+@@ -386,10 +386,9 @@ void ib_uverbs_comp_handler(struct ib_cq *cq, void *cq_context)
+ 	kill_fasync(&ev_queue->async_queue, SIGIO, POLL_IN);
+ }
+ 
+-static void
+-ib_uverbs_async_handler(struct ib_uverbs_async_event_file *async_file,
+-			__u64 element, __u64 event, struct list_head *obj_list,
+-			u32 *counter)
++void ib_uverbs_async_handler(struct ib_uverbs_async_event_file *async_file,
++			     __u64 element, __u64 event,
++			     struct list_head *obj_list, u32 *counter)
+ {
+ 	struct ib_uverbs_event *entry;
+ 	unsigned long flags;
+@@ -1187,9 +1186,6 @@ static void ib_uverbs_free_hw_resources(struct ib_uverbs_device *uverbs_dev,
+ 		 */
+ 		mutex_unlock(&uverbs_dev->lists_mutex);
+ 
+-		ib_uverbs_async_handler(READ_ONCE(file->async_file), 0,
+-					IB_EVENT_DEVICE_FATAL, NULL, NULL);
+-
+ 		uverbs_destroy_ufile_hw(file, RDMA_REMOVE_DRIVER_REMOVE);
+ 		kref_put(&file->ref, ib_uverbs_release_file);
+ 
+diff --git a/drivers/infiniband/core/uverbs_std_types_async_fd.c b/drivers/infiniband/core/uverbs_std_types_async_fd.c
+index 82ec0806b34b..61899eaf1f91 100644
+--- a/drivers/infiniband/core/uverbs_std_types_async_fd.c
++++ b/drivers/infiniband/core/uverbs_std_types_async_fd.c
+@@ -26,10 +26,38 @@ static int uverbs_async_event_destroy_uobj(struct ib_uobject *uobj,
+ 		container_of(uobj, struct ib_uverbs_async_event_file, uobj);
+ 
+ 	ib_unregister_event_handler(&event_file->event_handler);
+-	ib_uverbs_free_event_queue(&event_file->ev_queue);
++
++	if (why == RDMA_REMOVE_DRIVER_REMOVE)
++		ib_uverbs_async_handler(event_file, 0, IB_EVENT_DEVICE_FATAL,
++					NULL, NULL);
+ 	return 0;
+ }
+ 
++int uverbs_async_event_release(struct inode *inode, struct file *filp)
++{
++	struct ib_uverbs_async_event_file *event_file;
++	struct ib_uobject *uobj = filp->private_data;
++	int ret;
++
++	if (!uobj)
++		return uverbs_uobject_fd_release(inode, filp);
++
++	event_file =
++		container_of(uobj, struct ib_uverbs_async_event_file, uobj);
++
++	/*
++	 * The async event FD has to deliver IB_EVENT_DEVICE_FATAL even after
++	 * disassociation, so cleaning the event list must only happen after
++	 * release. The user knows it has reached the end of the event stream
++	 * when it sees IB_EVENT_DEVICE_FATAL.
++	 */
++	uverbs_uobject_get(uobj);
++	ret = uverbs_uobject_fd_release(inode, filp);
++	ib_uverbs_free_event_queue(&event_file->ev_queue);
++	uverbs_uobject_put(uobj);
++	return ret;
++}
++
+ DECLARE_UVERBS_NAMED_METHOD(
+ 	UVERBS_METHOD_ASYNC_EVENT_ALLOC,
+ 	UVERBS_ATTR_FD(UVERBS_ATTR_ASYNC_EVENT_ALLOC_FD_HANDLE,
+diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
+index d69dece3b1d5..30e08bcc9afb 100644
+--- a/drivers/infiniband/hw/cxgb4/cm.c
++++ b/drivers/infiniband/hw/cxgb4/cm.c
+@@ -2891,8 +2891,7 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb)
+ 			srqidx = ABORT_RSS_SRQIDX_G(
+ 					be32_to_cpu(req->srqidx_status));
+ 			if (srqidx) {
+-				complete_cached_srq_buffers(ep,
+-							    req->srqidx_status);
++				complete_cached_srq_buffers(ep, srqidx);
+ 			} else {
+ 				/* Hold ep ref until finish_peer_abort() */
+ 				c4iw_get_ep(&ep->com);
+@@ -3878,8 +3877,8 @@ static int read_tcb_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
+ 		return 0;
+ 	}
+ 
+-	ep->srqe_idx = t4_tcb_get_field32(tcb, TCB_RQ_START_W, TCB_RQ_START_W,
+-			TCB_RQ_START_S);
++	ep->srqe_idx = t4_tcb_get_field32(tcb, TCB_RQ_START_W, TCB_RQ_START_M,
++					  TCB_RQ_START_S);
+ cleanup:
+ 	pr_debug("ep %p tid %u %016x\n", ep, ep->hwtid, ep->srqe_idx);
+ 
+diff --git a/drivers/infiniband/hw/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
+index 13e4203497b3..a92346e88628 100644
+--- a/drivers/infiniband/hw/hfi1/user_sdma.c
++++ b/drivers/infiniband/hw/hfi1/user_sdma.c
+@@ -589,10 +589,6 @@ int hfi1_user_sdma_process_request(struct hfi1_filedata *fd,
+ 
+ 	set_comp_state(pq, cq, info.comp_idx, QUEUED, 0);
+ 	pq->state = SDMA_PKT_Q_ACTIVE;
+-	/* Send the first N packets in the request to buy us some time */
+-	ret = user_sdma_send_pkts(req, pcount);
+-	if (unlikely(ret < 0 && ret != -EBUSY))
+-		goto free_req;
+ 
+ 	/*
+ 	 * This is a somewhat blocking send implementation.
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_hw.c b/drivers/infiniband/hw/i40iw/i40iw_hw.c
+index 55a1fbf0e670..ae8b97c30665 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_hw.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_hw.c
+@@ -534,7 +534,7 @@ void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
+ 	int arp_index;
+ 
+ 	arp_index = i40iw_arp_table(iwdev, ip_addr, ipv4, mac_addr, action);
+-	if (arp_index == -1)
++	if (arp_index < 0)
+ 		return;
+ 	cqp_request = i40iw_get_cqp_request(&iwdev->cqp, false);
+ 	if (!cqp_request)
+diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
+index 26425dd2d960..a2b1f6af5ba3 100644
+--- a/drivers/infiniband/hw/mlx4/qp.c
++++ b/drivers/infiniband/hw/mlx4/qp.c
+@@ -2891,6 +2891,7 @@ static int build_sriov_qp0_header(struct mlx4_ib_sqp *sqp,
+ 	int send_size;
+ 	int header_size;
+ 	int spc;
++	int err;
+ 	int i;
+ 
+ 	if (wr->wr.opcode != IB_WR_SEND)
+@@ -2925,7 +2926,9 @@ static int build_sriov_qp0_header(struct mlx4_ib_sqp *sqp,
+ 
+ 	sqp->ud_header.lrh.virtual_lane    = 0;
+ 	sqp->ud_header.bth.solicited_event = !!(wr->wr.send_flags & IB_SEND_SOLICITED);
+-	ib_get_cached_pkey(ib_dev, sqp->qp.port, 0, &pkey);
++	err = ib_get_cached_pkey(ib_dev, sqp->qp.port, 0, &pkey);
++	if (err)
++		return err;
+ 	sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
+ 	if (sqp->qp.mlx4_ib_qp_type == MLX4_IB_QPT_TUN_SMI_OWNER)
+ 		sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->remote_qpn);
+@@ -3212,9 +3215,14 @@ static int build_mlx_header(struct mlx4_ib_sqp *sqp, const struct ib_ud_wr *wr,
+ 	}
+ 	sqp->ud_header.bth.solicited_event = !!(wr->wr.send_flags & IB_SEND_SOLICITED);
+ 	if (!sqp->qp.ibqp.qp_num)
+-		ib_get_cached_pkey(ib_dev, sqp->qp.port, sqp->pkey_index, &pkey);
++		err = ib_get_cached_pkey(ib_dev, sqp->qp.port, sqp->pkey_index,
++					 &pkey);
+ 	else
+-		ib_get_cached_pkey(ib_dev, sqp->qp.port, wr->pkey_index, &pkey);
++		err = ib_get_cached_pkey(ib_dev, sqp->qp.port, wr->pkey_index,
++					 &pkey);
++	if (err)
++		return err;
++
+ 	sqp->ud_header.bth.pkey = cpu_to_be16(pkey);
+ 	sqp->ud_header.bth.destination_qpn = cpu_to_be32(wr->remote_qpn);
+ 	sqp->ud_header.bth.psn = cpu_to_be32((sqp->send_psn++) & ((1 << 24) - 1));
+diff --git a/drivers/infiniband/sw/rxe/rxe_mmap.c b/drivers/infiniband/sw/rxe/rxe_mmap.c
+index 48f48122ddcb..6a413d73b95d 100644
+--- a/drivers/infiniband/sw/rxe/rxe_mmap.c
++++ b/drivers/infiniband/sw/rxe/rxe_mmap.c
+@@ -151,7 +151,7 @@ struct rxe_mmap_info *rxe_create_mmap_info(struct rxe_dev *rxe, u32 size,
+ 
+ 	ip = kmalloc(sizeof(*ip), GFP_KERNEL);
+ 	if (!ip)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	size = PAGE_ALIGN(size);
+ 
+diff --git a/drivers/infiniband/sw/rxe/rxe_queue.c b/drivers/infiniband/sw/rxe/rxe_queue.c
+index ff92704de32f..245040c3a35d 100644
+--- a/drivers/infiniband/sw/rxe/rxe_queue.c
++++ b/drivers/infiniband/sw/rxe/rxe_queue.c
+@@ -45,12 +45,15 @@ int do_mmap_info(struct rxe_dev *rxe, struct mminfo __user *outbuf,
+ 
+ 	if (outbuf) {
+ 		ip = rxe_create_mmap_info(rxe, buf_size, udata, buf);
+-		if (!ip)
++		if (IS_ERR(ip)) {
++			err = PTR_ERR(ip);
+ 			goto err1;
++		}
+ 
+-		err = copy_to_user(outbuf, &ip->info, sizeof(ip->info));
+-		if (err)
++		if (copy_to_user(outbuf, &ip->info, sizeof(ip->info))) {
++			err = -EFAULT;
+ 			goto err2;
++		}
+ 
+ 		spin_lock_bh(&rxe->pending_lock);
+ 		list_add(&ip->pending_mmaps, &rxe->pending_mmaps);
+@@ -64,7 +67,7 @@ int do_mmap_info(struct rxe_dev *rxe, struct mminfo __user *outbuf,
+ err2:
+ 	kfree(ip);
+ err1:
+-	return -EINVAL;
++	return err;
+ }
+ 
+ inline void rxe_queue_reset(struct rxe_queue *q)
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 20cce366e951..500d0a8c966f 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -101,6 +101,8 @@ struct kmem_cache *amd_iommu_irq_cache;
+ static void update_domain(struct protection_domain *domain);
+ static int protection_domain_init(struct protection_domain *domain);
+ static void detach_device(struct device *dev);
++static void update_and_flush_device_table(struct protection_domain *domain,
++					  struct domain_pgtable *pgtable);
+ 
+ /****************************************************************************
+  *
+@@ -151,6 +153,26 @@ static struct protection_domain *to_pdomain(struct iommu_domain *dom)
+ 	return container_of(dom, struct protection_domain, domain);
+ }
+ 
++static void amd_iommu_domain_get_pgtable(struct protection_domain *domain,
++					 struct domain_pgtable *pgtable)
++{
++	u64 pt_root = atomic64_read(&domain->pt_root);
++
++	pgtable->root = (u64 *)(pt_root & PAGE_MASK);
++	pgtable->mode = pt_root & 7; /* lowest 3 bits encode pgtable mode */
++}
++
++static u64 amd_iommu_domain_encode_pgtable(u64 *root, int mode)
++{
++	u64 pt_root;
++
++	/* lowest 3 bits encode pgtable mode */
++	pt_root = mode & 7;
++	pt_root |= (u64)root;
++
++	return pt_root;
++}
++
+ static struct iommu_dev_data *alloc_dev_data(u16 devid)
+ {
+ 	struct iommu_dev_data *dev_data;
+@@ -1397,13 +1419,18 @@ static struct page *free_sub_pt(unsigned long root, int mode,
+ 
+ static void free_pagetable(struct protection_domain *domain)
+ {
+-	unsigned long root = (unsigned long)domain->pt_root;
++	struct domain_pgtable pgtable;
+ 	struct page *freelist = NULL;
++	unsigned long root;
+ 
+-	BUG_ON(domain->mode < PAGE_MODE_NONE ||
+-	       domain->mode > PAGE_MODE_6_LEVEL);
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	atomic64_set(&domain->pt_root, 0);
+ 
+-	freelist = free_sub_pt(root, domain->mode, freelist);
++	BUG_ON(pgtable.mode < PAGE_MODE_NONE ||
++	       pgtable.mode > PAGE_MODE_6_LEVEL);
++
++	root = (unsigned long)pgtable.root;
++	freelist = free_sub_pt(root, pgtable.mode, freelist);
+ 
+ 	free_page_list(freelist);
+ }
+@@ -1417,24 +1444,36 @@ static bool increase_address_space(struct protection_domain *domain,
+ 				   unsigned long address,
+ 				   gfp_t gfp)
+ {
++	struct domain_pgtable pgtable;
+ 	unsigned long flags;
+ 	bool ret = false;
+-	u64 *pte;
++	u64 *pte, root;
+ 
+ 	spin_lock_irqsave(&domain->lock, flags);
+ 
+-	if (address <= PM_LEVEL_SIZE(domain->mode) ||
+-	    WARN_ON_ONCE(domain->mode == PAGE_MODE_6_LEVEL))
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++
++	if (address <= PM_LEVEL_SIZE(pgtable.mode) ||
++	    WARN_ON_ONCE(pgtable.mode == PAGE_MODE_6_LEVEL))
+ 		goto out;
+ 
+ 	pte = (void *)get_zeroed_page(gfp);
+ 	if (!pte)
+ 		goto out;
+ 
+-	*pte             = PM_LEVEL_PDE(domain->mode,
+-					iommu_virt_to_phys(domain->pt_root));
+-	domain->pt_root  = pte;
+-	domain->mode    += 1;
++	*pte = PM_LEVEL_PDE(pgtable.mode, iommu_virt_to_phys(pgtable.root));
++
++	pgtable.root  = pte;
++	pgtable.mode += 1;
++	update_and_flush_device_table(domain, &pgtable);
++	domain_flush_complete(domain);
++
++	/*
++	 * Device Table needs to be updated and flushed before the new root can
++	 * be published.
++	 */
++	root = amd_iommu_domain_encode_pgtable(pte, pgtable.mode);
++	atomic64_set(&domain->pt_root, root);
+ 
+ 	ret = true;
+ 
+@@ -1451,16 +1490,22 @@ static u64 *alloc_pte(struct protection_domain *domain,
+ 		      gfp_t gfp,
+ 		      bool *updated)
+ {
++	struct domain_pgtable pgtable;
+ 	int level, end_lvl;
+ 	u64 *pte, *page;
+ 
+ 	BUG_ON(!is_power_of_2(page_size));
+ 
+-	while (address > PM_LEVEL_SIZE(domain->mode))
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++
++	while (address > PM_LEVEL_SIZE(pgtable.mode)) {
+ 		*updated = increase_address_space(domain, address, gfp) || *updated;
++		amd_iommu_domain_get_pgtable(domain, &pgtable);
++	}
++
+ 
+-	level   = domain->mode - 1;
+-	pte     = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
++	level   = pgtable.mode - 1;
++	pte     = &pgtable.root[PM_LEVEL_INDEX(level, address)];
+ 	address = PAGE_SIZE_ALIGN(address, page_size);
+ 	end_lvl = PAGE_SIZE_LEVEL(page_size);
+ 
+@@ -1536,16 +1581,19 @@ static u64 *fetch_pte(struct protection_domain *domain,
+ 		      unsigned long address,
+ 		      unsigned long *page_size)
+ {
++	struct domain_pgtable pgtable;
+ 	int level;
+ 	u64 *pte;
+ 
+ 	*page_size = 0;
+ 
+-	if (address > PM_LEVEL_SIZE(domain->mode))
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++
++	if (address > PM_LEVEL_SIZE(pgtable.mode))
+ 		return NULL;
+ 
+-	level	   =  domain->mode - 1;
+-	pte	   = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
++	level	   =  pgtable.mode - 1;
++	pte	   = &pgtable.root[PM_LEVEL_INDEX(level, address)];
+ 	*page_size =  PTE_LEVEL_PAGE_SIZE(level);
+ 
+ 	while (level > 0) {
+@@ -1806,6 +1854,7 @@ static void dma_ops_domain_free(struct protection_domain *domain)
+ static struct protection_domain *dma_ops_domain_alloc(void)
+ {
+ 	struct protection_domain *domain;
++	u64 *pt_root, root;
+ 
+ 	domain = kzalloc(sizeof(struct protection_domain), GFP_KERNEL);
+ 	if (!domain)
+@@ -1814,12 +1863,14 @@ static struct protection_domain *dma_ops_domain_alloc(void)
+ 	if (protection_domain_init(domain))
+ 		goto free_domain;
+ 
+-	domain->mode = PAGE_MODE_3_LEVEL;
+-	domain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+-	domain->flags = PD_DMA_OPS_MASK;
+-	if (!domain->pt_root)
++	pt_root = (void *)get_zeroed_page(GFP_KERNEL);
++	if (!pt_root)
+ 		goto free_domain;
+ 
++	root = amd_iommu_domain_encode_pgtable(pt_root, PAGE_MODE_3_LEVEL);
++	atomic64_set(&domain->pt_root, root);
++	domain->flags = PD_DMA_OPS_MASK;
++
+ 	if (iommu_get_dma_cookie(&domain->domain) == -ENOMEM)
+ 		goto free_domain;
+ 
+@@ -1841,16 +1892,17 @@ static bool dma_ops_domain(struct protection_domain *domain)
+ }
+ 
+ static void set_dte_entry(u16 devid, struct protection_domain *domain,
++			  struct domain_pgtable *pgtable,
+ 			  bool ats, bool ppr)
+ {
+ 	u64 pte_root = 0;
+ 	u64 flags = 0;
+ 	u32 old_domid;
+ 
+-	if (domain->mode != PAGE_MODE_NONE)
+-		pte_root = iommu_virt_to_phys(domain->pt_root);
++	if (pgtable->mode != PAGE_MODE_NONE)
++		pte_root = iommu_virt_to_phys(pgtable->root);
+ 
+-	pte_root |= (domain->mode & DEV_ENTRY_MODE_MASK)
++	pte_root |= (pgtable->mode & DEV_ENTRY_MODE_MASK)
+ 		    << DEV_ENTRY_MODE_SHIFT;
+ 	pte_root |= DTE_FLAG_IR | DTE_FLAG_IW | DTE_FLAG_V | DTE_FLAG_TV;
+ 
+@@ -1923,6 +1975,7 @@ static void clear_dte_entry(u16 devid)
+ static void do_attach(struct iommu_dev_data *dev_data,
+ 		      struct protection_domain *domain)
+ {
++	struct domain_pgtable pgtable;
+ 	struct amd_iommu *iommu;
+ 	bool ats;
+ 
+@@ -1938,7 +1991,9 @@ static void do_attach(struct iommu_dev_data *dev_data,
+ 	domain->dev_cnt                 += 1;
+ 
+ 	/* Update device table */
+-	set_dte_entry(dev_data->devid, domain, ats, dev_data->iommu_v2);
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	set_dte_entry(dev_data->devid, domain, &pgtable,
++		      ats, dev_data->iommu_v2);
+ 	clone_aliases(dev_data->pdev);
+ 
+ 	device_flush_dte(dev_data);
+@@ -2249,22 +2304,34 @@ static int amd_iommu_domain_get_attr(struct iommu_domain *domain,
+  *
+  *****************************************************************************/
+ 
+-static void update_device_table(struct protection_domain *domain)
++static void update_device_table(struct protection_domain *domain,
++				struct domain_pgtable *pgtable)
+ {
+ 	struct iommu_dev_data *dev_data;
+ 
+ 	list_for_each_entry(dev_data, &domain->dev_list, list) {
+-		set_dte_entry(dev_data->devid, domain, dev_data->ats.enabled,
+-			      dev_data->iommu_v2);
++		set_dte_entry(dev_data->devid, domain, pgtable,
++			      dev_data->ats.enabled, dev_data->iommu_v2);
+ 		clone_aliases(dev_data->pdev);
+ 	}
+ }
+ 
++static void update_and_flush_device_table(struct protection_domain *domain,
++					  struct domain_pgtable *pgtable)
++{
++	update_device_table(domain, pgtable);
++	domain_flush_devices(domain);
++}
++
+ static void update_domain(struct protection_domain *domain)
+ {
+-	update_device_table(domain);
++	struct domain_pgtable pgtable;
+ 
+-	domain_flush_devices(domain);
++	/* Update device table */
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	update_and_flush_device_table(domain, &pgtable);
++
++	/* Flush domain TLB(s) and wait for completion */
+ 	domain_flush_tlb_pde(domain);
+ }
+ 
+@@ -2375,6 +2442,7 @@ out_err:
+ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+ {
+ 	struct protection_domain *pdomain;
++	u64 *pt_root, root;
+ 
+ 	switch (type) {
+ 	case IOMMU_DOMAIN_UNMANAGED:
+@@ -2382,13 +2450,15 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+ 		if (!pdomain)
+ 			return NULL;
+ 
+-		pdomain->mode    = PAGE_MODE_3_LEVEL;
+-		pdomain->pt_root = (void *)get_zeroed_page(GFP_KERNEL);
+-		if (!pdomain->pt_root) {
++		pt_root = (void *)get_zeroed_page(GFP_KERNEL);
++		if (!pt_root) {
+ 			protection_domain_free(pdomain);
+ 			return NULL;
+ 		}
+ 
++		root = amd_iommu_domain_encode_pgtable(pt_root, PAGE_MODE_3_LEVEL);
++		atomic64_set(&pdomain->pt_root, root);
++
+ 		pdomain->domain.geometry.aperture_start = 0;
+ 		pdomain->domain.geometry.aperture_end   = ~0ULL;
+ 		pdomain->domain.geometry.force_aperture = true;
+@@ -2406,7 +2476,7 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+ 		if (!pdomain)
+ 			return NULL;
+ 
+-		pdomain->mode = PAGE_MODE_NONE;
++		atomic64_set(&pdomain->pt_root, PAGE_MODE_NONE);
+ 		break;
+ 	default:
+ 		return NULL;
+@@ -2418,6 +2488,7 @@ static struct iommu_domain *amd_iommu_domain_alloc(unsigned type)
+ static void amd_iommu_domain_free(struct iommu_domain *dom)
+ {
+ 	struct protection_domain *domain;
++	struct domain_pgtable pgtable;
+ 
+ 	domain = to_pdomain(dom);
+ 
+@@ -2435,7 +2506,9 @@ static void amd_iommu_domain_free(struct iommu_domain *dom)
+ 		dma_ops_domain_free(domain);
+ 		break;
+ 	default:
+-		if (domain->mode != PAGE_MODE_NONE)
++		amd_iommu_domain_get_pgtable(domain, &pgtable);
++
++		if (pgtable.mode != PAGE_MODE_NONE)
+ 			free_pagetable(domain);
+ 
+ 		if (domain->flags & PD_IOMMUV2_MASK)
+@@ -2518,10 +2591,12 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
+ 			 gfp_t gfp)
+ {
+ 	struct protection_domain *domain = to_pdomain(dom);
++	struct domain_pgtable pgtable;
+ 	int prot = 0;
+ 	int ret;
+ 
+-	if (domain->mode == PAGE_MODE_NONE)
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	if (pgtable.mode == PAGE_MODE_NONE)
+ 		return -EINVAL;
+ 
+ 	if (iommu_prot & IOMMU_READ)
+@@ -2541,8 +2616,10 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
+ 			      struct iommu_iotlb_gather *gather)
+ {
+ 	struct protection_domain *domain = to_pdomain(dom);
++	struct domain_pgtable pgtable;
+ 
+-	if (domain->mode == PAGE_MODE_NONE)
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	if (pgtable.mode == PAGE_MODE_NONE)
+ 		return 0;
+ 
+ 	return iommu_unmap_page(domain, iova, page_size);
+@@ -2553,9 +2630,11 @@ static phys_addr_t amd_iommu_iova_to_phys(struct iommu_domain *dom,
+ {
+ 	struct protection_domain *domain = to_pdomain(dom);
+ 	unsigned long offset_mask, pte_pgsize;
++	struct domain_pgtable pgtable;
+ 	u64 *pte, __pte;
+ 
+-	if (domain->mode == PAGE_MODE_NONE)
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	if (pgtable.mode == PAGE_MODE_NONE)
+ 		return iova;
+ 
+ 	pte = fetch_pte(domain, iova, &pte_pgsize);
+@@ -2708,16 +2787,26 @@ EXPORT_SYMBOL(amd_iommu_unregister_ppr_notifier);
+ void amd_iommu_domain_direct_map(struct iommu_domain *dom)
+ {
+ 	struct protection_domain *domain = to_pdomain(dom);
++	struct domain_pgtable pgtable;
+ 	unsigned long flags;
++	u64 pt_root;
+ 
+ 	spin_lock_irqsave(&domain->lock, flags);
+ 
++	/* First save pgtable configuration */
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++
+ 	/* Update data structure */
+-	domain->mode    = PAGE_MODE_NONE;
++	pt_root = amd_iommu_domain_encode_pgtable(NULL, PAGE_MODE_NONE);
++	atomic64_set(&domain->pt_root, pt_root);
+ 
+ 	/* Make changes visible to IOMMUs */
+ 	update_domain(domain);
+ 
++	/* Restore old pgtable in domain->pt_root to free page-table */
++	pt_root = amd_iommu_domain_encode_pgtable(pgtable.root, pgtable.mode);
++	atomic64_set(&domain->pt_root, pt_root);
++
+ 	/* Page-table is not visible to IOMMU anymore, so free it */
+ 	free_pagetable(domain);
+ 
+@@ -2908,9 +2997,11 @@ static u64 *__get_gcr3_pte(u64 *root, int level, int pasid, bool alloc)
+ static int __set_gcr3(struct protection_domain *domain, int pasid,
+ 		      unsigned long cr3)
+ {
++	struct domain_pgtable pgtable;
+ 	u64 *pte;
+ 
+-	if (domain->mode != PAGE_MODE_NONE)
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	if (pgtable.mode != PAGE_MODE_NONE)
+ 		return -EINVAL;
+ 
+ 	pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, true);
+@@ -2924,9 +3015,11 @@ static int __set_gcr3(struct protection_domain *domain, int pasid,
+ 
+ static int __clear_gcr3(struct protection_domain *domain, int pasid)
+ {
++	struct domain_pgtable pgtable;
+ 	u64 *pte;
+ 
+-	if (domain->mode != PAGE_MODE_NONE)
++	amd_iommu_domain_get_pgtable(domain, &pgtable);
++	if (pgtable.mode != PAGE_MODE_NONE)
+ 		return -EINVAL;
+ 
+ 	pte = __get_gcr3_pte(domain->gcr3_tbl, domain->glx, pasid, false);
+diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
+index ca8c4522045b..7a8fdec138bd 100644
+--- a/drivers/iommu/amd_iommu_types.h
++++ b/drivers/iommu/amd_iommu_types.h
+@@ -468,8 +468,7 @@ struct protection_domain {
+ 				       iommu core code */
+ 	spinlock_t lock;	/* mostly used to lock the page table*/
+ 	u16 id;			/* the domain id written to the device table */
+-	int mode;		/* paging mode (0-6 levels) */
+-	u64 *pt_root;		/* page table root pointer */
++	atomic64_t pt_root;	/* pgtable root and pgtable mode */
+ 	int glx;		/* Number of levels for GCR3 table */
+ 	u64 *gcr3_tbl;		/* Guest CR3 table */
+ 	unsigned long flags;	/* flags to find out type of domain */
+@@ -477,6 +476,12 @@ struct protection_domain {
+ 	unsigned dev_iommu[MAX_IOMMUS]; /* per-IOMMU reference count */
+ };
+ 
++/* For decoded pt_root */
++struct domain_pgtable {
++	int mode;
++	u64 *root;
++};
++
+ /*
+  * Structure where we save information about one hardware AMD IOMMU in the
+  * system.
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 663d87924e5e..32db16f6debc 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -1417,6 +1417,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
+ 	struct mmc_request *mrq = &mqrq->brq.mrq;
+ 	struct request_queue *q = req->q;
+ 	struct mmc_host *host = mq->card->host;
++	enum mmc_issue_type issue_type = mmc_issue_type(mq, req);
+ 	unsigned long flags;
+ 	bool put_card;
+ 	int err;
+@@ -1446,7 +1447,7 @@ static void mmc_blk_cqe_complete_rq(struct mmc_queue *mq, struct request *req)
+ 
+ 	spin_lock_irqsave(&mq->lock, flags);
+ 
+-	mq->in_flight[mmc_issue_type(mq, req)] -= 1;
++	mq->in_flight[issue_type] -= 1;
+ 
+ 	put_card = (mmc_tot_in_flight(mq) == 0);
+ 
+diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
+index 9edc08685e86..9c0ccb3744c2 100644
+--- a/drivers/mmc/core/queue.c
++++ b/drivers/mmc/core/queue.c
+@@ -107,11 +107,10 @@ static enum blk_eh_timer_return mmc_cqe_timed_out(struct request *req)
+ 	case MMC_ISSUE_DCMD:
+ 		if (host->cqe_ops->cqe_timeout(host, mrq, &recovery_needed)) {
+ 			if (recovery_needed)
+-				__mmc_cqe_recovery_notifier(mq);
++				mmc_cqe_recovery_notifier(mrq);
+ 			return BLK_EH_RESET_TIMER;
+ 		}
+-		/* No timeout (XXX: huh? comment doesn't make much sense) */
+-		blk_mq_complete_request(req);
++		/* The request has gone already */
+ 		return BLK_EH_DONE;
+ 	default:
+ 		/* Timeout is handled by mmc core */
+@@ -125,18 +124,13 @@ static enum blk_eh_timer_return mmc_mq_timed_out(struct request *req,
+ 	struct request_queue *q = req->q;
+ 	struct mmc_queue *mq = q->queuedata;
+ 	unsigned long flags;
+-	int ret;
++	bool ignore_tout;
+ 
+ 	spin_lock_irqsave(&mq->lock, flags);
+-
+-	if (mq->recovery_needed || !mq->use_cqe)
+-		ret = BLK_EH_RESET_TIMER;
+-	else
+-		ret = mmc_cqe_timed_out(req);
+-
++	ignore_tout = mq->recovery_needed || !mq->use_cqe;
+ 	spin_unlock_irqrestore(&mq->lock, flags);
+ 
+-	return ret;
++	return ignore_tout ? BLK_EH_RESET_TIMER : mmc_cqe_timed_out(req);
+ }
+ 
+ static void mmc_mq_recovery_handler(struct work_struct *work)
+diff --git a/drivers/mmc/host/alcor.c b/drivers/mmc/host/alcor.c
+index 1aee485d56d4..026ca9194ce5 100644
+--- a/drivers/mmc/host/alcor.c
++++ b/drivers/mmc/host/alcor.c
+@@ -1104,7 +1104,7 @@ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
+ 
+ 	if (ret) {
+ 		dev_err(&pdev->dev, "Failed to get irq for data line\n");
+-		return ret;
++		goto free_host;
+ 	}
+ 
+ 	mutex_init(&host->cmd_mutex);
+@@ -1116,6 +1116,10 @@ static int alcor_pci_sdmmc_drv_probe(struct platform_device *pdev)
+ 	dev_set_drvdata(&pdev->dev, host);
+ 	mmc_add_host(mmc);
+ 	return 0;
++
++free_host:
++	mmc_free_host(mmc);
++	return ret;
+ }
+ 
+ static int alcor_pci_sdmmc_drv_remove(struct platform_device *pdev)
+diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
+index 2a2173d953f5..7da47196c596 100644
+--- a/drivers/mmc/host/sdhci-acpi.c
++++ b/drivers/mmc/host/sdhci-acpi.c
+@@ -605,10 +605,12 @@ static int sdhci_acpi_emmc_amd_probe_slot(struct platform_device *pdev,
+ }
+ 
+ static const struct sdhci_acpi_slot sdhci_acpi_slot_amd_emmc = {
+-	.chip   = &sdhci_acpi_chip_amd,
+-	.caps   = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE,
+-	.quirks = SDHCI_QUIRK_32BIT_DMA_ADDR | SDHCI_QUIRK_32BIT_DMA_SIZE |
+-			SDHCI_QUIRK_32BIT_ADMA_SIZE,
++	.chip		= &sdhci_acpi_chip_amd,
++	.caps		= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE,
++	.quirks		= SDHCI_QUIRK_32BIT_DMA_ADDR |
++			  SDHCI_QUIRK_32BIT_DMA_SIZE |
++			  SDHCI_QUIRK_32BIT_ADMA_SIZE,
++	.quirks2	= SDHCI_QUIRK2_BROKEN_64_BIT_DMA,
+ 	.probe_slot     = sdhci_acpi_emmc_amd_probe_slot,
+ };
+ 
+diff --git a/drivers/mmc/host/sdhci-pci-gli.c b/drivers/mmc/host/sdhci-pci-gli.c
+index ce15a05f23d4..fd76aa672e02 100644
+--- a/drivers/mmc/host/sdhci-pci-gli.c
++++ b/drivers/mmc/host/sdhci-pci-gli.c
+@@ -26,6 +26,9 @@
+ #define   SDHCI_GLI_9750_DRIVING_2    GENMASK(27, 26)
+ #define   GLI_9750_DRIVING_1_VALUE    0xFFF
+ #define   GLI_9750_DRIVING_2_VALUE    0x3
++#define   SDHCI_GLI_9750_SEL_1        BIT(29)
++#define   SDHCI_GLI_9750_SEL_2        BIT(31)
++#define   SDHCI_GLI_9750_ALL_RST      (BIT(24)|BIT(25)|BIT(28)|BIT(30))
+ 
+ #define SDHCI_GLI_9750_PLL	      0x864
+ #define   SDHCI_GLI_9750_PLL_TX2_INV    BIT(23)
+@@ -122,6 +125,8 @@ static void gli_set_9750(struct sdhci_host *host)
+ 				    GLI_9750_DRIVING_1_VALUE);
+ 	driving_value |= FIELD_PREP(SDHCI_GLI_9750_DRIVING_2,
+ 				    GLI_9750_DRIVING_2_VALUE);
++	driving_value &= ~(SDHCI_GLI_9750_SEL_1|SDHCI_GLI_9750_SEL_2|SDHCI_GLI_9750_ALL_RST);
++	driving_value |= SDHCI_GLI_9750_SEL_2;
+ 	sdhci_writel(host, driving_value, SDHCI_GLI_9750_DRIVING);
+ 
+ 	sw_ctrl_value &= ~SDHCI_GLI_9750_SW_CTRL_4;
+@@ -334,6 +339,18 @@ static u32 sdhci_gl9750_readl(struct sdhci_host *host, int reg)
+ 	return value;
+ }
+ 
++#ifdef CONFIG_PM_SLEEP
++static int sdhci_pci_gli_resume(struct sdhci_pci_chip *chip)
++{
++	struct sdhci_pci_slot *slot = chip->slots[0];
++
++	pci_free_irq_vectors(slot->chip->pdev);
++	gli_pcie_enable_msi(slot);
++
++	return sdhci_pci_resume_host(chip);
++}
++#endif
++
+ static const struct sdhci_ops sdhci_gl9755_ops = {
+ 	.set_clock		= sdhci_set_clock,
+ 	.enable_dma		= sdhci_pci_enable_dma,
+@@ -348,6 +365,9 @@ const struct sdhci_pci_fixes sdhci_gl9755 = {
+ 	.quirks2	= SDHCI_QUIRK2_BROKEN_DDR50,
+ 	.probe_slot	= gli_probe_slot_gl9755,
+ 	.ops            = &sdhci_gl9755_ops,
++#ifdef CONFIG_PM_SLEEP
++	.resume         = sdhci_pci_gli_resume,
++#endif
+ };
+ 
+ static const struct sdhci_ops sdhci_gl9750_ops = {
+@@ -366,4 +386,7 @@ const struct sdhci_pci_fixes sdhci_gl9750 = {
+ 	.quirks2	= SDHCI_QUIRK2_BROKEN_DDR50,
+ 	.probe_slot	= gli_probe_slot_gl9750,
+ 	.ops            = &sdhci_gl9750_ops,
++#ifdef CONFIG_PM_SLEEP
++	.resume         = sdhci_pci_gli_resume,
++#endif
+ };
+diff --git a/drivers/net/dsa/dsa_loop.c b/drivers/net/dsa/dsa_loop.c
+index fdcb70b9f0e4..400207c5c7de 100644
+--- a/drivers/net/dsa/dsa_loop.c
++++ b/drivers/net/dsa/dsa_loop.c
+@@ -360,6 +360,7 @@ static void __exit dsa_loop_exit(void)
+ }
+ module_exit(dsa_loop_exit);
+ 
++MODULE_SOFTDEP("pre: dsa_loop_bdinfo");
+ MODULE_LICENSE("GPL");
+ MODULE_AUTHOR("Florian Fainelli");
+ MODULE_DESCRIPTION("DSA loopback driver");
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index 9e895ab586d5..a7780c06fa65 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -397,6 +397,7 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
+ 	ocelot->stats_layout	= felix->info->stats_layout;
+ 	ocelot->num_stats	= felix->info->num_stats;
+ 	ocelot->shared_queue_sz	= felix->info->shared_queue_sz;
++	ocelot->num_mact_rows	= felix->info->num_mact_rows;
+ 	ocelot->ops		= felix->info->ops;
+ 
+ 	port_phy_modes = kcalloc(num_phys_ports, sizeof(phy_interface_t),
+diff --git a/drivers/net/dsa/ocelot/felix.h b/drivers/net/dsa/ocelot/felix.h
+index 3a7580015b62..8771d40324f1 100644
+--- a/drivers/net/dsa/ocelot/felix.h
++++ b/drivers/net/dsa/ocelot/felix.h
+@@ -15,6 +15,7 @@ struct felix_info {
+ 	const u32 *const		*map;
+ 	const struct ocelot_ops		*ops;
+ 	int				shared_queue_sz;
++	int				num_mact_rows;
+ 	const struct ocelot_stat_layout	*stats_layout;
+ 	unsigned int			num_stats;
+ 	int				num_ports;
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index 2c812b481778..edc1a67c002b 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -1090,6 +1090,7 @@ struct felix_info felix_info_vsc9959 = {
+ 	.stats_layout		= vsc9959_stats_layout,
+ 	.num_stats		= ARRAY_SIZE(vsc9959_stats_layout),
+ 	.shared_queue_sz	= 128 * 1024,
++	.num_mact_rows		= 2048,
+ 	.num_ports		= 6,
+ 	.switch_pci_bar		= 4,
+ 	.imdio_pci_bar		= 0,
+diff --git a/drivers/net/ethernet/broadcom/Kconfig b/drivers/net/ethernet/broadcom/Kconfig
+index 53055ce5dfd6..2a69c0d06f3c 100644
+--- a/drivers/net/ethernet/broadcom/Kconfig
++++ b/drivers/net/ethernet/broadcom/Kconfig
+@@ -69,6 +69,7 @@ config BCMGENET
+ 	select BCM7XXX_PHY
+ 	select MDIO_BCM_UNIMAC
+ 	select DIMLIB
++	select BROADCOM_PHY if ARCH_BCM2835
+ 	help
+ 	  This driver supports the built-in Ethernet MACs found in the
+ 	  Broadcom BCM7xxx Set Top Box family chipset.
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+index 7ff147e89426..d9bbaa734d98 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
+@@ -86,7 +86,7 @@ static void free_rx_fd(struct dpaa2_eth_priv *priv,
+ 	for (i = 1; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
+ 		addr = dpaa2_sg_get_addr(&sgt[i]);
+ 		sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
+-		dma_unmap_page(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
++		dma_unmap_page(dev, addr, priv->rx_buf_size,
+ 			       DMA_BIDIRECTIONAL);
+ 
+ 		free_pages((unsigned long)sg_vaddr, 0);
+@@ -144,7 +144,7 @@ static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv,
+ 		/* Get the address and length from the S/G entry */
+ 		sg_addr = dpaa2_sg_get_addr(sge);
+ 		sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, sg_addr);
+-		dma_unmap_page(dev, sg_addr, DPAA2_ETH_RX_BUF_SIZE,
++		dma_unmap_page(dev, sg_addr, priv->rx_buf_size,
+ 			       DMA_BIDIRECTIONAL);
+ 
+ 		sg_length = dpaa2_sg_get_len(sge);
+@@ -185,7 +185,7 @@ static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv,
+ 				(page_address(page) - page_address(head_page));
+ 
+ 			skb_add_rx_frag(skb, i - 1, head_page, page_offset,
+-					sg_length, DPAA2_ETH_RX_BUF_SIZE);
++					sg_length, priv->rx_buf_size);
+ 		}
+ 
+ 		if (dpaa2_sg_is_final(sge))
+@@ -211,7 +211,7 @@ static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count)
+ 
+ 	for (i = 0; i < count; i++) {
+ 		vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
+-		dma_unmap_page(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE,
++		dma_unmap_page(dev, buf_array[i], priv->rx_buf_size,
+ 			       DMA_BIDIRECTIONAL);
+ 		free_pages((unsigned long)vaddr, 0);
+ 	}
+@@ -335,7 +335,7 @@ static u32 run_xdp(struct dpaa2_eth_priv *priv,
+ 		break;
+ 	case XDP_REDIRECT:
+ 		dma_unmap_page(priv->net_dev->dev.parent, addr,
+-			       DPAA2_ETH_RX_BUF_SIZE, DMA_BIDIRECTIONAL);
++			       priv->rx_buf_size, DMA_BIDIRECTIONAL);
+ 		ch->buf_count--;
+ 		xdp.data_hard_start = vaddr;
+ 		err = xdp_do_redirect(priv->net_dev, &xdp, xdp_prog);
+@@ -374,7 +374,7 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
+ 	trace_dpaa2_rx_fd(priv->net_dev, fd);
+ 
+ 	vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
+-	dma_sync_single_for_cpu(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
++	dma_sync_single_for_cpu(dev, addr, priv->rx_buf_size,
+ 				DMA_BIDIRECTIONAL);
+ 
+ 	fas = dpaa2_get_fas(vaddr, false);
+@@ -393,13 +393,13 @@ static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
+ 			return;
+ 		}
+ 
+-		dma_unmap_page(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
++		dma_unmap_page(dev, addr, priv->rx_buf_size,
+ 			       DMA_BIDIRECTIONAL);
+ 		skb = build_linear_skb(ch, fd, vaddr);
+ 	} else if (fd_format == dpaa2_fd_sg) {
+ 		WARN_ON(priv->xdp_prog);
+ 
+-		dma_unmap_page(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
++		dma_unmap_page(dev, addr, priv->rx_buf_size,
+ 			       DMA_BIDIRECTIONAL);
+ 		skb = build_frag_skb(priv, ch, buf_data);
+ 		free_pages((unsigned long)vaddr, 0);
+@@ -974,7 +974,7 @@ static int add_bufs(struct dpaa2_eth_priv *priv,
+ 		if (!page)
+ 			goto err_alloc;
+ 
+-		addr = dma_map_page(dev, page, 0, DPAA2_ETH_RX_BUF_SIZE,
++		addr = dma_map_page(dev, page, 0, priv->rx_buf_size,
+ 				    DMA_BIDIRECTIONAL);
+ 		if (unlikely(dma_mapping_error(dev, addr)))
+ 			goto err_map;
+@@ -984,7 +984,7 @@ static int add_bufs(struct dpaa2_eth_priv *priv,
+ 		/* tracing point */
+ 		trace_dpaa2_eth_buf_seed(priv->net_dev,
+ 					 page, DPAA2_ETH_RX_BUF_RAW_SIZE,
+-					 addr, DPAA2_ETH_RX_BUF_SIZE,
++					 addr, priv->rx_buf_size,
+ 					 bpid);
+ 	}
+ 
+@@ -1715,7 +1715,7 @@ static bool xdp_mtu_valid(struct dpaa2_eth_priv *priv, int mtu)
+ 	int mfl, linear_mfl;
+ 
+ 	mfl = DPAA2_ETH_L2_MAX_FRM(mtu);
+-	linear_mfl = DPAA2_ETH_RX_BUF_SIZE - DPAA2_ETH_RX_HWA_SIZE -
++	linear_mfl = priv->rx_buf_size - DPAA2_ETH_RX_HWA_SIZE -
+ 		     dpaa2_eth_rx_head_room(priv) - XDP_PACKET_HEADROOM;
+ 
+ 	if (mfl > linear_mfl) {
+@@ -2457,6 +2457,11 @@ static int set_buffer_layout(struct dpaa2_eth_priv *priv)
+ 	else
+ 		rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN;
+ 
++	/* We need to ensure that the buffer size seen by WRIOP is a multiple
++	 * of 64 or 256 bytes depending on the WRIOP version.
++	 */
++	priv->rx_buf_size = ALIGN_DOWN(DPAA2_ETH_RX_BUF_SIZE, rx_buf_align);
++
+ 	/* tx buffer */
+ 	buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
+ 	buf_layout.pass_timestamp = true;
+@@ -3121,7 +3126,7 @@ static int bind_dpni(struct dpaa2_eth_priv *priv)
+ 	pools_params.num_dpbp = 1;
+ 	pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
+ 	pools_params.pools[0].backup_pool = 0;
+-	pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUF_SIZE;
++	pools_params.pools[0].buffer_size = priv->rx_buf_size;
+ 	err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
+ 	if (err) {
+ 		dev_err(dev, "dpni_set_pools() failed\n");
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+index 7635db3ef903..13242bf5b427 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
+@@ -382,6 +382,7 @@ struct dpaa2_eth_priv {
+ 	u16 tx_data_offset;
+ 
+ 	struct fsl_mc_device *dpbp_dev;
++	u16 rx_buf_size;
+ 	u16 bpid;
+ 	struct iommu_domain *iommu_domain;
+ 
+diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+index 96676abcebd5..c53f091af2cf 100644
+--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
+@@ -625,7 +625,7 @@ static int num_rules(struct dpaa2_eth_priv *priv)
+ 
+ static int update_cls_rule(struct net_device *net_dev,
+ 			   struct ethtool_rx_flow_spec *new_fs,
+-			   int location)
++			   unsigned int location)
+ {
+ 	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+ 	struct dpaa2_eth_cls_rule *rule;
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+index 8995e32dd1c0..992908e6eebf 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_hw_mgmt.c
+@@ -45,6 +45,8 @@
+ 
+ #define MGMT_MSG_TIMEOUT                5000
+ 
++#define SET_FUNC_PORT_MGMT_TIMEOUT	25000
++
+ #define mgmt_to_pfhwdev(pf_mgmt)        \
+ 		container_of(pf_mgmt, struct hinic_pfhwdev, pf_to_mgmt)
+ 
+@@ -238,12 +240,13 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 			    u8 *buf_in, u16 in_size,
+ 			    u8 *buf_out, u16 *out_size,
+ 			    enum mgmt_direction_type direction,
+-			    u16 resp_msg_id)
++			    u16 resp_msg_id, u32 timeout)
+ {
+ 	struct hinic_hwif *hwif = pf_to_mgmt->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
+ 	struct hinic_recv_msg *recv_msg;
+ 	struct completion *recv_done;
++	unsigned long timeo;
+ 	u16 msg_id;
+ 	int err;
+ 
+@@ -267,8 +270,9 @@ static int msg_to_mgmt_sync(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 		goto unlock_sync_msg;
+ 	}
+ 
+-	if (!wait_for_completion_timeout(recv_done,
+-					 msecs_to_jiffies(MGMT_MSG_TIMEOUT))) {
++	timeo = msecs_to_jiffies(timeout ? timeout : MGMT_MSG_TIMEOUT);
++
++	if (!wait_for_completion_timeout(recv_done, timeo)) {
+ 		dev_err(&pdev->dev, "MGMT timeout, MSG id = %d\n", msg_id);
+ 		err = -ETIMEDOUT;
+ 		goto unlock_sync_msg;
+@@ -342,6 +346,7 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ {
+ 	struct hinic_hwif *hwif = pf_to_mgmt->hwif;
+ 	struct pci_dev *pdev = hwif->pdev;
++	u32 timeout = 0;
+ 
+ 	if (sync != HINIC_MGMT_MSG_SYNC) {
+ 		dev_err(&pdev->dev, "Invalid MGMT msg type\n");
+@@ -353,9 +358,12 @@ int hinic_msg_to_mgmt(struct hinic_pf_to_mgmt *pf_to_mgmt,
+ 		return -EINVAL;
+ 	}
+ 
++	if (cmd == HINIC_PORT_CMD_SET_FUNC_STATE)
++		timeout = SET_FUNC_PORT_MGMT_TIMEOUT;
++
+ 	return msg_to_mgmt_sync(pf_to_mgmt, mod, cmd, buf_in, in_size,
+ 				buf_out, out_size, MGMT_DIRECT_SEND,
+-				MSG_NOT_RESP);
++				MSG_NOT_RESP, timeout);
+ }
+ 
+ /**
+diff --git a/drivers/net/ethernet/huawei/hinic/hinic_main.c b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+index 13560975c103..63b92f6cc856 100644
+--- a/drivers/net/ethernet/huawei/hinic/hinic_main.c
++++ b/drivers/net/ethernet/huawei/hinic/hinic_main.c
+@@ -483,7 +483,6 @@ static int hinic_close(struct net_device *netdev)
+ {
+ 	struct hinic_dev *nic_dev = netdev_priv(netdev);
+ 	unsigned int flags;
+-	int err;
+ 
+ 	down(&nic_dev->mgmt_lock);
+ 
+@@ -497,20 +496,9 @@ static int hinic_close(struct net_device *netdev)
+ 
+ 	up(&nic_dev->mgmt_lock);
+ 
+-	err = hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_DISABLE);
+-	if (err) {
+-		netif_err(nic_dev, drv, netdev,
+-			  "Failed to set func port state\n");
+-		nic_dev->flags |= (flags & HINIC_INTF_UP);
+-		return err;
+-	}
++	hinic_port_set_state(nic_dev, HINIC_PORT_DISABLE);
+ 
+-	err = hinic_port_set_state(nic_dev, HINIC_PORT_DISABLE);
+-	if (err) {
+-		netif_err(nic_dev, drv, netdev, "Failed to set port state\n");
+-		nic_dev->flags |= (flags & HINIC_INTF_UP);
+-		return err;
+-	}
++	hinic_port_set_func_state(nic_dev, HINIC_FUNC_PORT_DISABLE);
+ 
+ 	if (nic_dev->flags & HINIC_RSS_ENABLE) {
+ 		hinic_rss_deinit(nic_dev);
+diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
+index e1651756bf9d..f70bb81e1ed6 100644
+--- a/drivers/net/ethernet/moxa/moxart_ether.c
++++ b/drivers/net/ethernet/moxa/moxart_ether.c
+@@ -564,7 +564,7 @@ static int moxart_remove(struct platform_device *pdev)
+ 	struct net_device *ndev = platform_get_drvdata(pdev);
+ 
+ 	unregister_netdev(ndev);
+-	free_irq(ndev->irq, ndev);
++	devm_free_irq(&pdev->dev, ndev->irq, ndev);
+ 	moxart_mac_free_memory(ndev);
+ 	free_netdev(ndev);
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index b14286dc49fb..419e2ce2eac0 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1016,10 +1016,8 @@ int ocelot_fdb_dump(struct ocelot *ocelot, int port,
+ {
+ 	int i, j;
+ 
+-	/* Loop through all the mac tables entries. There are 1024 rows of 4
+-	 * entries.
+-	 */
+-	for (i = 0; i < 1024; i++) {
++	/* Loop through all the mac tables entries. */
++	for (i = 0; i < ocelot->num_mact_rows; i++) {
+ 		for (j = 0; j < 4; j++) {
+ 			struct ocelot_mact_entry entry;
+ 			bool is_static;
+@@ -1446,8 +1444,15 @@ static void ocelot_port_attr_stp_state_set(struct ocelot *ocelot, int port,
+ 
+ void ocelot_set_ageing_time(struct ocelot *ocelot, unsigned int msecs)
+ {
+-	ocelot_write(ocelot, ANA_AUTOAGE_AGE_PERIOD(msecs / 2),
+-		     ANA_AUTOAGE);
++	unsigned int age_period = ANA_AUTOAGE_AGE_PERIOD(msecs / 2000);
++
++	/* Setting AGE_PERIOD to zero effectively disables automatic aging,
++	 * which is clearly not what our intention is. So avoid that.
++	 */
++	if (!age_period)
++		age_period = 1;
++
++	ocelot_rmw(ocelot, age_period, ANA_AUTOAGE_AGE_PERIOD_M, ANA_AUTOAGE);
+ }
+ EXPORT_SYMBOL(ocelot_set_ageing_time);
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot_regs.c b/drivers/net/ethernet/mscc/ocelot_regs.c
+index b88b5899b227..7d4fd1b6adda 100644
+--- a/drivers/net/ethernet/mscc/ocelot_regs.c
++++ b/drivers/net/ethernet/mscc/ocelot_regs.c
+@@ -431,6 +431,7 @@ int ocelot_chip_init(struct ocelot *ocelot, const struct ocelot_ops *ops)
+ 	ocelot->stats_layout = ocelot_stats_layout;
+ 	ocelot->num_stats = ARRAY_SIZE(ocelot_stats_layout);
+ 	ocelot->shared_queue_sz = 224 * 1024;
++	ocelot->num_mact_rows = 1024;
+ 	ocelot->ops = ops;
+ 
+ 	ret = ocelot_regfields_init(ocelot, ocelot_regfields);
+diff --git a/drivers/net/ethernet/natsemi/jazzsonic.c b/drivers/net/ethernet/natsemi/jazzsonic.c
+index 51fa82b429a3..40970352d208 100644
+--- a/drivers/net/ethernet/natsemi/jazzsonic.c
++++ b/drivers/net/ethernet/natsemi/jazzsonic.c
+@@ -235,11 +235,13 @@ static int jazz_sonic_probe(struct platform_device *pdev)
+ 
+ 	err = register_netdev(dev);
+ 	if (err)
+-		goto out1;
++		goto undo_probe1;
+ 
+ 	return 0;
+ 
+-out1:
++undo_probe1:
++	dma_free_coherent(lp->device, SIZEOF_SONIC_DESC * SONIC_BUS_SCALE(lp->dma_bitmode),
++			  lp->descriptors, lp->descriptors_laddr);
+ 	release_mem_region(dev->base_addr, SONIC_MEM_SIZE);
+ out:
+ 	free_netdev(dev);
+diff --git a/drivers/net/ethernet/netronome/nfp/abm/main.c b/drivers/net/ethernet/netronome/nfp/abm/main.c
+index 354efffac0f9..bdbf0726145e 100644
+--- a/drivers/net/ethernet/netronome/nfp/abm/main.c
++++ b/drivers/net/ethernet/netronome/nfp/abm/main.c
+@@ -333,8 +333,10 @@ nfp_abm_vnic_alloc(struct nfp_app *app, struct nfp_net *nn, unsigned int id)
+ 		goto err_free_alink;
+ 
+ 	alink->prio_map = kzalloc(abm->prio_map_len, GFP_KERNEL);
+-	if (!alink->prio_map)
++	if (!alink->prio_map) {
++		err = -ENOMEM;
+ 		goto err_free_alink;
++	}
+ 
+ 	/* This is a multi-host app, make sure MAC/PHY is up, but don't
+ 	 * make the MAC/PHY state follow the state of any of the ports.
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 6b633e9d76da..07a6b609f741 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -2127,6 +2127,8 @@ static void rtl8169_get_mac_version(struct rtl8169_private *tp)
+ 		{ 0x7cf, 0x348,	RTL_GIGA_MAC_VER_07 },
+ 		{ 0x7cf, 0x248,	RTL_GIGA_MAC_VER_07 },
+ 		{ 0x7cf, 0x340,	RTL_GIGA_MAC_VER_13 },
++		/* RTL8401, reportedly works if treated as RTL8101e */
++		{ 0x7cf, 0x240,	RTL_GIGA_MAC_VER_13 },
+ 		{ 0x7cf, 0x343,	RTL_GIGA_MAC_VER_10 },
+ 		{ 0x7cf, 0x342,	RTL_GIGA_MAC_VER_16 },
+ 		{ 0x7c8, 0x348,	RTL_GIGA_MAC_VER_09 },
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+index e0a5fe83d8e0..bfc4a92f1d92 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-qcom-ethqos.c
+@@ -75,6 +75,11 @@ struct ethqos_emac_por {
+ 	unsigned int value;
+ };
+ 
++struct ethqos_emac_driver_data {
++	const struct ethqos_emac_por *por;
++	unsigned int num_por;
++};
++
+ struct qcom_ethqos {
+ 	struct platform_device *pdev;
+ 	void __iomem *rgmii_base;
+@@ -171,6 +176,11 @@ static const struct ethqos_emac_por emac_v2_3_0_por[] = {
+ 	{ .offset = RGMII_IO_MACRO_CONFIG2,	.value = 0x00002060 },
+ };
+ 
++static const struct ethqos_emac_driver_data emac_v2_3_0_data = {
++	.por = emac_v2_3_0_por,
++	.num_por = ARRAY_SIZE(emac_v2_3_0_por),
++};
++
+ static int ethqos_dll_configure(struct qcom_ethqos *ethqos)
+ {
+ 	unsigned int val;
+@@ -442,6 +452,7 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
+ 	struct device_node *np = pdev->dev.of_node;
+ 	struct plat_stmmacenet_data *plat_dat;
+ 	struct stmmac_resources stmmac_res;
++	const struct ethqos_emac_driver_data *data;
+ 	struct qcom_ethqos *ethqos;
+ 	struct resource *res;
+ 	int ret;
+@@ -471,7 +482,9 @@ static int qcom_ethqos_probe(struct platform_device *pdev)
+ 		goto err_mem;
+ 	}
+ 
+-	ethqos->por = of_device_get_match_data(&pdev->dev);
++	data = of_device_get_match_data(&pdev->dev);
++	ethqos->por = data->por;
++	ethqos->num_por = data->num_por;
+ 
+ 	ethqos->rgmii_clk = devm_clk_get(&pdev->dev, "rgmii");
+ 	if (IS_ERR(ethqos->rgmii_clk)) {
+@@ -526,7 +539,7 @@ static int qcom_ethqos_remove(struct platform_device *pdev)
+ }
+ 
+ static const struct of_device_id qcom_ethqos_match[] = {
+-	{ .compatible = "qcom,qcs404-ethqos", .data = &emac_v2_3_0_por},
++	{ .compatible = "qcom,qcs404-ethqos", .data = &emac_v2_3_0_data},
+ 	{ }
+ };
+ MODULE_DEVICE_TABLE(of, qcom_ethqos_match);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+index 494c859b4ade..67ba67ed0cb9 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac5.c
+@@ -624,7 +624,7 @@ int dwmac5_est_configure(void __iomem *ioaddr, struct stmmac_est *cfg,
+ 		total_offset += offset;
+ 	}
+ 
+-	total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000;
++	total_ctr = cfg->ctr[0] + cfg->ctr[1] * 1000000000ULL;
+ 	total_ctr += total_offset;
+ 
+ 	ctr_low = do_div(total_ctr, 1000000000);
+diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
+index 2c0a24c606fc..28a5d46ad526 100644
+--- a/drivers/net/hyperv/netvsc_drv.c
++++ b/drivers/net/hyperv/netvsc_drv.c
+@@ -710,7 +710,8 @@ no_memory:
+ 	goto drop;
+ }
+ 
+-static int netvsc_start_xmit(struct sk_buff *skb, struct net_device *ndev)
++static netdev_tx_t netvsc_start_xmit(struct sk_buff *skb,
++				     struct net_device *ndev)
+ {
+ 	return netvsc_xmit(skb, ndev, false);
+ }
+diff --git a/drivers/net/phy/microchip_t1.c b/drivers/net/phy/microchip_t1.c
+index 001def4509c2..fed3e395f18e 100644
+--- a/drivers/net/phy/microchip_t1.c
++++ b/drivers/net/phy/microchip_t1.c
+@@ -3,9 +3,21 @@
+ 
+ #include <linux/kernel.h>
+ #include <linux/module.h>
++#include <linux/delay.h>
+ #include <linux/mii.h>
+ #include <linux/phy.h>
+ 
++/* External Register Control Register */
++#define LAN87XX_EXT_REG_CTL                     (0x14)
++#define LAN87XX_EXT_REG_CTL_RD_CTL              (0x1000)
++#define LAN87XX_EXT_REG_CTL_WR_CTL              (0x0800)
++
++/* External Register Read Data Register */
++#define LAN87XX_EXT_REG_RD_DATA                 (0x15)
++
++/* External Register Write Data Register */
++#define LAN87XX_EXT_REG_WR_DATA                 (0x16)
++
+ /* Interrupt Source Register */
+ #define LAN87XX_INTERRUPT_SOURCE                (0x18)
+ 
+@@ -14,9 +26,160 @@
+ #define LAN87XX_MASK_LINK_UP                    (0x0004)
+ #define LAN87XX_MASK_LINK_DOWN                  (0x0002)
+ 
++/* phyaccess nested types */
++#define	PHYACC_ATTR_MODE_READ		0
++#define	PHYACC_ATTR_MODE_WRITE		1
++#define	PHYACC_ATTR_MODE_MODIFY		2
++
++#define	PHYACC_ATTR_BANK_SMI		0
++#define	PHYACC_ATTR_BANK_MISC		1
++#define	PHYACC_ATTR_BANK_PCS		2
++#define	PHYACC_ATTR_BANK_AFE		3
++#define	PHYACC_ATTR_BANK_MAX		7
++
+ #define DRIVER_AUTHOR	"Nisar Sayed <nisar.sayed@microchip.com>"
+ #define DRIVER_DESC	"Microchip LAN87XX T1 PHY driver"
+ 
++struct access_ereg_val {
++	u8  mode;
++	u8  bank;
++	u8  offset;
++	u16 val;
++	u16 mask;
++};
++
++static int access_ereg(struct phy_device *phydev, u8 mode, u8 bank,
++		       u8 offset, u16 val)
++{
++	u16 ereg = 0;
++	int rc = 0;
++
++	if (mode > PHYACC_ATTR_MODE_WRITE || bank > PHYACC_ATTR_BANK_MAX)
++		return -EINVAL;
++
++	if (bank == PHYACC_ATTR_BANK_SMI) {
++		if (mode == PHYACC_ATTR_MODE_WRITE)
++			rc = phy_write(phydev, offset, val);
++		else
++			rc = phy_read(phydev, offset);
++		return rc;
++	}
++
++	if (mode == PHYACC_ATTR_MODE_WRITE) {
++		ereg = LAN87XX_EXT_REG_CTL_WR_CTL;
++		rc = phy_write(phydev, LAN87XX_EXT_REG_WR_DATA, val);
++		if (rc < 0)
++			return rc;
++	} else {
++		ereg = LAN87XX_EXT_REG_CTL_RD_CTL;
++	}
++
++	ereg |= (bank << 8) | offset;
++
++	rc = phy_write(phydev, LAN87XX_EXT_REG_CTL, ereg);
++	if (rc < 0)
++		return rc;
++
++	if (mode == PHYACC_ATTR_MODE_READ)
++		rc = phy_read(phydev, LAN87XX_EXT_REG_RD_DATA);
++
++	return rc;
++}
++
++static int access_ereg_modify_changed(struct phy_device *phydev,
++				      u8 bank, u8 offset, u16 val, u16 mask)
++{
++	int new = 0, rc = 0;
++
++	if (bank > PHYACC_ATTR_BANK_MAX)
++		return -EINVAL;
++
++	rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ, bank, offset, val);
++	if (rc < 0)
++		return rc;
++
++	new = val | (rc & (mask ^ 0xFFFF));
++	rc = access_ereg(phydev, PHYACC_ATTR_MODE_WRITE, bank, offset, new);
++
++	return rc;
++}
++
++static int lan87xx_phy_init(struct phy_device *phydev)
++{
++	static const struct access_ereg_val init[] = {
++		/* TX Amplitude = 5 */
++		{PHYACC_ATTR_MODE_MODIFY, PHYACC_ATTR_BANK_AFE, 0x0B,
++		 0x000A, 0x001E},
++		/* Clear SMI interrupts */
++		{PHYACC_ATTR_MODE_READ, PHYACC_ATTR_BANK_SMI, 0x18,
++		 0, 0},
++		/* Clear MISC interrupts */
++		{PHYACC_ATTR_MODE_READ, PHYACC_ATTR_BANK_MISC, 0x08,
++		 0, 0},
++		/* Turn on TC10 Ring Oscillator (ROSC) */
++		{PHYACC_ATTR_MODE_MODIFY, PHYACC_ATTR_BANK_MISC, 0x20,
++		 0x0020, 0x0020},
++		/* WUR Detect Length to 1.2uS, LPC Detect Length to 1.09uS */
++		{PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_PCS, 0x20,
++		 0x283C, 0},
++		/* Wake_In Debounce Length to 39uS, Wake_Out Length to 79uS */
++		{PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_MISC, 0x21,
++		 0x274F, 0},
++		/* Enable Auto Wake Forward to Wake_Out, ROSC on, Sleep,
++		 * and Wake_In to wake PHY
++		 */
++		{PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_MISC, 0x20,
++		 0x80A7, 0},
++		/* Enable WUP Auto Fwd, Enable Wake on MDI, Wakeup Debouncer
++		 * to 128 uS
++		 */
++		{PHYACC_ATTR_MODE_WRITE, PHYACC_ATTR_BANK_MISC, 0x24,
++		 0xF110, 0},
++		/* Enable HW Init */
++		{PHYACC_ATTR_MODE_MODIFY, PHYACC_ATTR_BANK_SMI, 0x1A,
++		 0x0100, 0x0100},
++	};
++	int rc, i;
++
++	/* Start manual initialization procedures in Managed Mode */
++	rc = access_ereg_modify_changed(phydev, PHYACC_ATTR_BANK_SMI,
++					0x1a, 0x0000, 0x0100);
++	if (rc < 0)
++		return rc;
++
++	/* Soft Reset the SMI block */
++	rc = access_ereg_modify_changed(phydev, PHYACC_ATTR_BANK_SMI,
++					0x00, 0x8000, 0x8000);
++	if (rc < 0)
++		return rc;
++
++	/* Check to see if the self-clearing bit is cleared */
++	usleep_range(1000, 2000);
++	rc = access_ereg(phydev, PHYACC_ATTR_MODE_READ,
++			 PHYACC_ATTR_BANK_SMI, 0x00, 0);
++	if (rc < 0)
++		return rc;
++	if ((rc & 0x8000) != 0)
++		return -ETIMEDOUT;
++
++	/* PHY Initialization */
++	for (i = 0; i < ARRAY_SIZE(init); i++) {
++		if (init[i].mode == PHYACC_ATTR_MODE_MODIFY) {
++			rc = access_ereg_modify_changed(phydev, init[i].bank,
++							init[i].offset,
++							init[i].val,
++							init[i].mask);
++		} else {
++			rc = access_ereg(phydev, init[i].mode, init[i].bank,
++					 init[i].offset, init[i].val);
++		}
++		if (rc < 0)
++			return rc;
++	}
++
++	return 0;
++}
++
+ static int lan87xx_phy_config_intr(struct phy_device *phydev)
+ {
+ 	int rc, val = 0;
+@@ -40,6 +203,13 @@ static int lan87xx_phy_ack_interrupt(struct phy_device *phydev)
+ 	return rc < 0 ? rc : 0;
+ }
+ 
++static int lan87xx_config_init(struct phy_device *phydev)
++{
++	int rc = lan87xx_phy_init(phydev);
++
++	return rc < 0 ? rc : 0;
++}
++
+ static struct phy_driver microchip_t1_phy_driver[] = {
+ 	{
+ 		.phy_id         = 0x0007c150,
+@@ -48,6 +218,7 @@ static struct phy_driver microchip_t1_phy_driver[] = {
+ 
+ 		.features       = PHY_BASIC_T1_FEATURES,
+ 
++		.config_init	= lan87xx_config_init,
+ 		.config_aneg    = genphy_config_aneg,
+ 
+ 		.ack_interrupt  = lan87xx_phy_ack_interrupt,
+diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
+index 355bfdef48d2..594d97d3c8ab 100644
+--- a/drivers/net/phy/phy.c
++++ b/drivers/net/phy/phy.c
+@@ -1132,9 +1132,11 @@ int phy_ethtool_set_eee(struct phy_device *phydev, struct ethtool_eee *data)
+ 		/* Restart autonegotiation so the new modes get sent to the
+ 		 * link partner.
+ 		 */
+-		ret = phy_restart_aneg(phydev);
+-		if (ret < 0)
+-			return ret;
++		if (phydev->autoneg == AUTONEG_ENABLE) {
++			ret = phy_restart_aneg(phydev);
++			if (ret < 0)
++				return ret;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ppp/pppoe.c b/drivers/net/ppp/pppoe.c
+index d760a36db28c..beedaad08255 100644
+--- a/drivers/net/ppp/pppoe.c
++++ b/drivers/net/ppp/pppoe.c
+@@ -490,6 +490,9 @@ static int pppoe_disc_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	if (!skb)
+ 		goto out;
+ 
++	if (skb->pkt_type != PACKET_HOST)
++		goto abort;
++
+ 	if (!pskb_may_pull(skb, sizeof(struct pppoe_hdr)))
+ 		goto abort;
+ 
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index 2fe7a3188282..f7129bc898cc 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -1231,9 +1231,11 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
+ 			break;
+ 	} while (rq->vq->num_free);
+ 	if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
+-		u64_stats_update_begin(&rq->stats.syncp);
++		unsigned long flags;
++
++		flags = u64_stats_update_begin_irqsave(&rq->stats.syncp);
+ 		rq->stats.kicks++;
+-		u64_stats_update_end(&rq->stats.syncp);
++		u64_stats_update_end_irqrestore(&rq->stats.syncp, flags);
+ 	}
+ 
+ 	return !oom;
+diff --git a/drivers/pinctrl/intel/pinctrl-baytrail.c b/drivers/pinctrl/intel/pinctrl-baytrail.c
+index b409642f168d..9b821c9cbd16 100644
+--- a/drivers/pinctrl/intel/pinctrl-baytrail.c
++++ b/drivers/pinctrl/intel/pinctrl-baytrail.c
+@@ -1286,6 +1286,7 @@ static const struct gpio_chip byt_gpio_chip = {
+ 	.direction_output	= byt_gpio_direction_output,
+ 	.get			= byt_gpio_get,
+ 	.set			= byt_gpio_set,
++	.set_config		= gpiochip_generic_config,
+ 	.dbg_show		= byt_gpio_dbg_show,
+ };
+ 
+diff --git a/drivers/pinctrl/intel/pinctrl-cherryview.c b/drivers/pinctrl/intel/pinctrl-cherryview.c
+index 4c74fdde576d..1093a6105d40 100644
+--- a/drivers/pinctrl/intel/pinctrl-cherryview.c
++++ b/drivers/pinctrl/intel/pinctrl-cherryview.c
+@@ -1479,11 +1479,15 @@ static void chv_gpio_irq_handler(struct irq_desc *desc)
+ 	struct chv_pinctrl *pctrl = gpiochip_get_data(gc);
+ 	struct irq_chip *chip = irq_desc_get_chip(desc);
+ 	unsigned long pending;
++	unsigned long flags;
+ 	u32 intr_line;
+ 
+ 	chained_irq_enter(chip, desc);
+ 
++	raw_spin_lock_irqsave(&chv_lock, flags);
+ 	pending = readl(pctrl->regs + CHV_INTSTAT);
++	raw_spin_unlock_irqrestore(&chv_lock, flags);
++
+ 	for_each_set_bit(intr_line, &pending, pctrl->community->nirqs) {
+ 		unsigned int irq, offset;
+ 
+diff --git a/drivers/pinctrl/intel/pinctrl-sunrisepoint.c b/drivers/pinctrl/intel/pinctrl-sunrisepoint.c
+index 330c8f077b73..4d7a86a5a37b 100644
+--- a/drivers/pinctrl/intel/pinctrl-sunrisepoint.c
++++ b/drivers/pinctrl/intel/pinctrl-sunrisepoint.c
+@@ -15,17 +15,18 @@
+ 
+ #include "pinctrl-intel.h"
+ 
+-#define SPT_PAD_OWN	0x020
+-#define SPT_PADCFGLOCK	0x0a0
+-#define SPT_HOSTSW_OWN	0x0d0
+-#define SPT_GPI_IS	0x100
+-#define SPT_GPI_IE	0x120
++#define SPT_PAD_OWN		0x020
++#define SPT_H_PADCFGLOCK	0x090
++#define SPT_LP_PADCFGLOCK	0x0a0
++#define SPT_HOSTSW_OWN		0x0d0
++#define SPT_GPI_IS		0x100
++#define SPT_GPI_IE		0x120
+ 
+ #define SPT_COMMUNITY(b, s, e)				\
+ 	{						\
+ 		.barno = (b),				\
+ 		.padown_offset = SPT_PAD_OWN,		\
+-		.padcfglock_offset = SPT_PADCFGLOCK,	\
++		.padcfglock_offset = SPT_LP_PADCFGLOCK,	\
+ 		.hostown_offset = SPT_HOSTSW_OWN,	\
+ 		.is_offset = SPT_GPI_IS,		\
+ 		.ie_offset = SPT_GPI_IE,		\
+@@ -47,7 +48,7 @@
+ 	{						\
+ 		.barno = (b),				\
+ 		.padown_offset = SPT_PAD_OWN,		\
+-		.padcfglock_offset = SPT_PADCFGLOCK,	\
++		.padcfglock_offset = SPT_H_PADCFGLOCK,	\
+ 		.hostown_offset = SPT_HOSTSW_OWN,	\
+ 		.is_offset = SPT_GPI_IS,		\
+ 		.ie_offset = SPT_GPI_IE,		\
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 1a948c3f54b7..9f1c9951949e 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -692,7 +692,7 @@ static void msm_gpio_update_dual_edge_pos(struct msm_pinctrl *pctrl,
+ 
+ 		pol = msm_readl_intr_cfg(pctrl, g);
+ 		pol ^= BIT(g->intr_polarity_bit);
+-		msm_writel_intr_cfg(val, pctrl, g);
++		msm_writel_intr_cfg(pol, pctrl, g);
+ 
+ 		val2 = msm_readl_io(pctrl, g) & BIT(g->in_bit);
+ 		intstat = msm_readl_intr_status(pctrl, g);
+diff --git a/drivers/s390/net/ism_drv.c b/drivers/s390/net/ism_drv.c
+index 4fc2056bd227..e615dc240150 100644
+--- a/drivers/s390/net/ism_drv.c
++++ b/drivers/s390/net/ism_drv.c
+@@ -521,8 +521,10 @@ static int ism_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ 
+ 	ism->smcd = smcd_alloc_dev(&pdev->dev, dev_name(&pdev->dev), &ism_ops,
+ 				   ISM_NR_DMBS);
+-	if (!ism->smcd)
++	if (!ism->smcd) {
++		ret = -ENOMEM;
+ 		goto err_resource;
++	}
+ 
+ 	ism->smcd->priv = ism;
+ 	ret = ism_dev_init(ism);
+diff --git a/drivers/usb/cdns3/gadget.c b/drivers/usb/cdns3/gadget.c
+index 3574dbb09366..a5cccbd5d356 100644
+--- a/drivers/usb/cdns3/gadget.c
++++ b/drivers/usb/cdns3/gadget.c
+@@ -2548,7 +2548,7 @@ found:
+ 	link_trb = priv_req->trb;
+ 
+ 	/* Update ring only if removed request is on pending_req_list list */
+-	if (req_on_hw_ring) {
++	if (req_on_hw_ring && link_trb) {
+ 		link_trb->buffer = TRB_BUFFER(priv_ep->trb_pool_dma +
+ 			((priv_req->end_trb + 1) * TRB_SIZE));
+ 		link_trb->control = (link_trb->control & TRB_CYCLE) |
+diff --git a/drivers/usb/core/devio.c b/drivers/usb/core/devio.c
+index 6833c918abce..d93d94d7ff50 100644
+--- a/drivers/usb/core/devio.c
++++ b/drivers/usb/core/devio.c
+@@ -217,6 +217,7 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ {
+ 	struct usb_memory *usbm = NULL;
+ 	struct usb_dev_state *ps = file->private_data;
++	struct usb_hcd *hcd = bus_to_hcd(ps->dev->bus);
+ 	size_t size = vma->vm_end - vma->vm_start;
+ 	void *mem;
+ 	unsigned long flags;
+@@ -250,11 +251,19 @@ static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
+ 	usbm->vma_use_count = 1;
+ 	INIT_LIST_HEAD(&usbm->memlist);
+ 
+-	if (remap_pfn_range(vma, vma->vm_start,
+-			virt_to_phys(usbm->mem) >> PAGE_SHIFT,
+-			size, vma->vm_page_prot) < 0) {
+-		dec_usb_memory_use_count(usbm, &usbm->vma_use_count);
+-		return -EAGAIN;
++	if (hcd->localmem_pool || !hcd_uses_dma(hcd)) {
++		if (remap_pfn_range(vma, vma->vm_start,
++				    virt_to_phys(usbm->mem) >> PAGE_SHIFT,
++				    size, vma->vm_page_prot) < 0) {
++			dec_usb_memory_use_count(usbm, &usbm->vma_use_count);
++			return -EAGAIN;
++		}
++	} else {
++		if (dma_mmap_coherent(hcd->self.sysdev, vma, mem, dma_handle,
++				      size)) {
++			dec_usb_memory_use_count(usbm, &usbm->vma_use_count);
++			return -EAGAIN;
++		}
+ 	}
+ 
+ 	vma->vm_flags |= VM_IO;
+diff --git a/drivers/usb/core/hub.c b/drivers/usb/core/hub.c
+index 2b6565c06c23..fc748c731832 100644
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -39,6 +39,7 @@
+ 
+ #define USB_VENDOR_GENESYS_LOGIC		0x05e3
+ #define USB_VENDOR_SMSC				0x0424
++#define USB_PRODUCT_USB5534B			0x5534
+ #define HUB_QUIRK_CHECK_PORT_AUTOSUSPEND	0x01
+ #define HUB_QUIRK_DISABLE_AUTOSUSPEND		0x02
+ 
+@@ -5621,8 +5622,11 @@ out_hdev_lock:
+ }
+ 
+ static const struct usb_device_id hub_id_table[] = {
+-    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR | USB_DEVICE_ID_MATCH_INT_CLASS,
++    { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
++                   | USB_DEVICE_ID_MATCH_PRODUCT
++                   | USB_DEVICE_ID_MATCH_INT_CLASS,
+       .idVendor = USB_VENDOR_SMSC,
++      .idProduct = USB_PRODUCT_USB5534B,
+       .bInterfaceClass = USB_CLASS_HUB,
+       .driver_info = HUB_QUIRK_DISABLE_AUTOSUSPEND},
+     { .match_flags = USB_DEVICE_ID_MATCH_VENDOR
+diff --git a/drivers/usb/dwc3/gadget.c b/drivers/usb/dwc3/gadget.c
+index bc1cf6d0412a..7e9643d25b14 100644
+--- a/drivers/usb/dwc3/gadget.c
++++ b/drivers/usb/dwc3/gadget.c
+@@ -2483,9 +2483,6 @@ static int dwc3_gadget_ep_reclaim_trb_sg(struct dwc3_ep *dep,
+ 	for_each_sg(sg, s, pending, i) {
+ 		trb = &dep->trb_pool[dep->trb_dequeue];
+ 
+-		if (trb->ctrl & DWC3_TRB_CTRL_HWO)
+-			break;
+-
+ 		req->sg = sg_next(s);
+ 		req->num_pending_sgs--;
+ 
+diff --git a/drivers/usb/gadget/configfs.c b/drivers/usb/gadget/configfs.c
+index 32b637e3e1fa..6a9aa4413d64 100644
+--- a/drivers/usb/gadget/configfs.c
++++ b/drivers/usb/gadget/configfs.c
+@@ -260,6 +260,9 @@ static ssize_t gadget_dev_desc_UDC_store(struct config_item *item,
+ 	char *name;
+ 	int ret;
+ 
++	if (strlen(page) < len)
++		return -EOVERFLOW;
++
+ 	name = kstrdup(page, GFP_KERNEL);
+ 	if (!name)
+ 		return -ENOMEM;
+diff --git a/drivers/usb/gadget/legacy/audio.c b/drivers/usb/gadget/legacy/audio.c
+index dd81fd538cb8..a748ed0842e8 100644
+--- a/drivers/usb/gadget/legacy/audio.c
++++ b/drivers/usb/gadget/legacy/audio.c
+@@ -300,8 +300,10 @@ static int audio_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(cdev->gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto fail;
++		}
+ 		usb_otg_descriptor_init(cdev->gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/usb/gadget/legacy/cdc2.c b/drivers/usb/gadget/legacy/cdc2.c
+index 8d7a556ece30..563363aba48f 100644
+--- a/drivers/usb/gadget/legacy/cdc2.c
++++ b/drivers/usb/gadget/legacy/cdc2.c
+@@ -179,8 +179,10 @@ static int cdc_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto fail1;
++		}
+ 		usb_otg_descriptor_init(gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/usb/gadget/legacy/ncm.c b/drivers/usb/gadget/legacy/ncm.c
+index c61e71ba7045..0f1b45e3abd1 100644
+--- a/drivers/usb/gadget/legacy/ncm.c
++++ b/drivers/usb/gadget/legacy/ncm.c
+@@ -156,8 +156,10 @@ static int gncm_bind(struct usb_composite_dev *cdev)
+ 		struct usb_descriptor_header *usb_desc;
+ 
+ 		usb_desc = usb_otg_descriptor_alloc(gadget);
+-		if (!usb_desc)
++		if (!usb_desc) {
++			status = -ENOMEM;
+ 			goto fail;
++		}
+ 		usb_otg_descriptor_init(gadget, usb_desc);
+ 		otg_desc[0] = usb_desc;
+ 		otg_desc[1] = NULL;
+diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c
+index a8273b589456..5af0fe9c61d7 100644
+--- a/drivers/usb/gadget/udc/net2272.c
++++ b/drivers/usb/gadget/udc/net2272.c
+@@ -2647,6 +2647,8 @@ net2272_plat_probe(struct platform_device *pdev)
+  err_req:
+ 	release_mem_region(base, len);
+  err:
++	kfree(dev);
++
+ 	return ret;
+ }
+ 
+diff --git a/drivers/usb/gadget/udc/tegra-xudc.c b/drivers/usb/gadget/udc/tegra-xudc.c
+index 634c2c19a176..a22d190d00a0 100644
+--- a/drivers/usb/gadget/udc/tegra-xudc.c
++++ b/drivers/usb/gadget/udc/tegra-xudc.c
+@@ -3740,11 +3740,11 @@ static int __maybe_unused tegra_xudc_suspend(struct device *dev)
+ 
+ 	flush_work(&xudc->usb_role_sw_work);
+ 
+-	/* Forcibly disconnect before powergating. */
+-	tegra_xudc_device_mode_off(xudc);
+-
+-	if (!pm_runtime_status_suspended(dev))
++	if (!pm_runtime_status_suspended(dev)) {
++		/* Forcibly disconnect before powergating. */
++		tegra_xudc_device_mode_off(xudc);
+ 		tegra_xudc_powergate(xudc);
++	}
+ 
+ 	pm_runtime_disable(dev);
+ 
+diff --git a/drivers/usb/host/xhci-plat.c b/drivers/usb/host/xhci-plat.c
+index 315b4552693c..52c625c02341 100644
+--- a/drivers/usb/host/xhci-plat.c
++++ b/drivers/usb/host/xhci-plat.c
+@@ -363,6 +363,7 @@ static int xhci_plat_remove(struct platform_device *dev)
+ 	struct clk *reg_clk = xhci->reg_clk;
+ 	struct usb_hcd *shared_hcd = xhci->shared_hcd;
+ 
++	pm_runtime_get_sync(&dev->dev);
+ 	xhci->xhc_state |= XHCI_STATE_REMOVING;
+ 
+ 	usb_remove_hcd(shared_hcd);
+@@ -376,8 +377,9 @@ static int xhci_plat_remove(struct platform_device *dev)
+ 	clk_disable_unprepare(reg_clk);
+ 	usb_put_hcd(hcd);
+ 
+-	pm_runtime_set_suspended(&dev->dev);
+ 	pm_runtime_disable(&dev->dev);
++	pm_runtime_put_noidle(&dev->dev);
++	pm_runtime_set_suspended(&dev->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/usb/host/xhci-ring.c b/drivers/usb/host/xhci-ring.c
+index 2fbc00c0a6e8..49f3f3ce7737 100644
+--- a/drivers/usb/host/xhci-ring.c
++++ b/drivers/usb/host/xhci-ring.c
+@@ -3425,8 +3425,8 @@ int xhci_queue_bulk_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
+ 			/* New sg entry */
+ 			--num_sgs;
+ 			sent_len -= block_len;
+-			if (num_sgs != 0) {
+-				sg = sg_next(sg);
++			sg = sg_next(sg);
++			if (num_sgs != 0 && sg) {
+ 				block_len = sg_dma_len(sg);
+ 				addr = (u64) sg_dma_address(sg);
+ 				addr += sent_len;
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index 1dc97f2d6201..d3d78176b23c 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -398,7 +398,7 @@ int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
+ 	struct inode *inode;
+ 	sector_t block;
+ 	unsigned shift;
+-	int ret;
++	int ret, ret2;
+ 
+ 	object = container_of(op->op.object,
+ 			      struct cachefiles_object, fscache);
+@@ -430,8 +430,8 @@ int cachefiles_read_or_alloc_page(struct fscache_retrieval *op,
+ 	block = page->index;
+ 	block <<= shift;
+ 
+-	ret = bmap(inode, &block);
+-	ASSERT(ret < 0);
++	ret2 = bmap(inode, &block);
++	ASSERT(ret2 == 0);
+ 
+ 	_debug("%llx -> %llx",
+ 	       (unsigned long long) (page->index << shift),
+@@ -739,8 +739,8 @@ int cachefiles_read_or_alloc_pages(struct fscache_retrieval *op,
+ 		block = page->index;
+ 		block <<= shift;
+ 
+-		ret = bmap(inode, &block);
+-		ASSERT(!ret);
++		ret2 = bmap(inode, &block);
++		ASSERT(ret2 == 0);
+ 
+ 		_debug("%llx -> %llx",
+ 		       (unsigned long long) (page->index << shift),
+diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
+index 6f6fb3606a5d..a4545aa04efc 100644
+--- a/fs/cifs/cifssmb.c
++++ b/fs/cifs/cifssmb.c
+@@ -2138,8 +2138,8 @@ cifs_writev_requeue(struct cifs_writedata *wdata)
+ 			}
+ 		}
+ 
++		kref_put(&wdata2->refcount, cifs_writedata_release);
+ 		if (rc) {
+-			kref_put(&wdata2->refcount, cifs_writedata_release);
+ 			if (is_retryable_error(rc))
+ 				continue;
+ 			i += nr_pages;
+diff --git a/fs/eventpoll.c b/fs/eventpoll.c
+index b0a097274cfe..f5a481089893 100644
+--- a/fs/eventpoll.c
++++ b/fs/eventpoll.c
+@@ -1857,34 +1857,33 @@ fetch_events:
+ 		 * event delivery.
+ 		 */
+ 		init_wait(&wait);
+-		write_lock_irq(&ep->lock);
+-		__add_wait_queue_exclusive(&ep->wq, &wait);
+-		write_unlock_irq(&ep->lock);
+ 
++		write_lock_irq(&ep->lock);
+ 		/*
+-		 * We don't want to sleep if the ep_poll_callback() sends us
+-		 * a wakeup in between. That's why we set the task state
+-		 * to TASK_INTERRUPTIBLE before doing the checks.
++		 * Barrierless variant, waitqueue_active() is called under
++		 * the same lock on wakeup ep_poll_callback() side, so it
++		 * is safe to avoid an explicit barrier.
+ 		 */
+-		set_current_state(TASK_INTERRUPTIBLE);
++		__set_current_state(TASK_INTERRUPTIBLE);
++
+ 		/*
+-		 * Always short-circuit for fatal signals to allow
+-		 * threads to make a timely exit without the chance of
+-		 * finding more events available and fetching
+-		 * repeatedly.
++		 * Do the final check under the lock. ep_scan_ready_list()
++		 * plays with two lists (->rdllist and ->ovflist) and there
++		 * is always a race when both lists are empty for short
++		 * period of time although events are pending, so lock is
++		 * important.
+ 		 */
+-		if (fatal_signal_pending(current)) {
+-			res = -EINTR;
+-			break;
++		eavail = ep_events_available(ep);
++		if (!eavail) {
++			if (signal_pending(current))
++				res = -EINTR;
++			else
++				__add_wait_queue_exclusive(&ep->wq, &wait);
+ 		}
++		write_unlock_irq(&ep->lock);
+ 
+-		eavail = ep_events_available(ep);
+-		if (eavail)
+-			break;
+-		if (signal_pending(current)) {
+-			res = -EINTR;
++		if (eavail || res)
+ 			break;
+-		}
+ 
+ 		if (!schedule_hrtimeout_range(to, slack, HRTIMER_MODE_ABS)) {
+ 			timed_out = 1;
+@@ -1905,6 +1904,15 @@ fetch_events:
+ 	}
+ 
+ send_events:
++	if (fatal_signal_pending(current)) {
++		/*
++		 * Always short-circuit for fatal signals to allow
++		 * threads to make a timely exit without the chance of
++		 * finding more events available and fetching
++		 * repeatedly.
++		 */
++		res = -EINTR;
++	}
+ 	/*
+ 	 * Try to transfer events to user space. In case we get 0 events and
+ 	 * there's still timeout left over, we go trying again in search of
+diff --git a/fs/exec.c b/fs/exec.c
+index a58625f27652..77603ceed51f 100644
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -1277,6 +1277,8 @@ int flush_old_exec(struct linux_binprm * bprm)
+ 	 */
+ 	set_mm_exe_file(bprm->mm, bprm->file);
+ 
++	would_dump(bprm, bprm->file);
++
+ 	/*
+ 	 * Release all of the old mmap stuff
+ 	 */
+@@ -1820,8 +1822,6 @@ static int __do_execve_file(int fd, struct filename *filename,
+ 	if (retval < 0)
+ 		goto out;
+ 
+-	would_dump(bprm, bprm->file);
+-
+ 	retval = exec_binprm(bprm);
+ 	if (retval < 0)
+ 		goto out;
+diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
+index 08f6fbb3655e..31ed26435625 100644
+--- a/fs/gfs2/bmap.c
++++ b/fs/gfs2/bmap.c
+@@ -528,10 +528,12 @@ lower_metapath:
+ 
+ 		/* Advance in metadata tree. */
+ 		(mp->mp_list[hgt])++;
+-		if (mp->mp_list[hgt] >= sdp->sd_inptrs) {
+-			if (!hgt)
++		if (hgt) {
++			if (mp->mp_list[hgt] >= sdp->sd_inptrs)
++				goto lower_metapath;
++		} else {
++			if (mp->mp_list[hgt] >= sdp->sd_diptrs)
+ 				break;
+-			goto lower_metapath;
+ 		}
+ 
+ fill_up_metapath:
+@@ -876,10 +878,9 @@ static int gfs2_iomap_get(struct inode *inode, loff_t pos, loff_t length,
+ 					ret = -ENOENT;
+ 					goto unlock;
+ 				} else {
+-					/* report a hole */
+ 					iomap->offset = pos;
+ 					iomap->length = length;
+-					goto do_alloc;
++					goto hole_found;
+ 				}
+ 			}
+ 			iomap->length = size;
+@@ -933,8 +934,6 @@ unlock:
+ 	return ret;
+ 
+ do_alloc:
+-	iomap->addr = IOMAP_NULL_ADDR;
+-	iomap->type = IOMAP_HOLE;
+ 	if (flags & IOMAP_REPORT) {
+ 		if (pos >= size)
+ 			ret = -ENOENT;
+@@ -956,6 +955,9 @@ do_alloc:
+ 		if (pos < size && height == ip->i_height)
+ 			ret = gfs2_hole_size(inode, lblock, len, mp, iomap);
+ 	}
++hole_found:
++	iomap->addr = IOMAP_NULL_ADDR;
++	iomap->type = IOMAP_HOLE;
+ 	goto out;
+ }
+ 
+diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
+index c090d5ad3f22..3a020bdc358c 100644
+--- a/fs/gfs2/lops.c
++++ b/fs/gfs2/lops.c
+@@ -259,7 +259,7 @@ static struct bio *gfs2_log_alloc_bio(struct gfs2_sbd *sdp, u64 blkno,
+ 	struct super_block *sb = sdp->sd_vfs;
+ 	struct bio *bio = bio_alloc(GFP_NOIO, BIO_MAX_PAGES);
+ 
+-	bio->bi_iter.bi_sector = blkno << (sb->s_blocksize_bits - 9);
++	bio->bi_iter.bi_sector = blkno << sdp->sd_fsb2bb_shift;
+ 	bio_set_dev(bio, sb->s_bdev);
+ 	bio->bi_end_io = end_io;
+ 	bio->bi_private = sdp;
+@@ -505,7 +505,7 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head,
+ 	unsigned int bsize = sdp->sd_sb.sb_bsize, off;
+ 	unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift;
+ 	unsigned int shift = PAGE_SHIFT - bsize_shift;
+-	unsigned int readahead_blocks = BIO_MAX_PAGES << shift;
++	unsigned int max_bio_size = 2 * 1024 * 1024;
+ 	struct gfs2_journal_extent *je;
+ 	int sz, ret = 0;
+ 	struct bio *bio = NULL;
+@@ -533,12 +533,17 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head,
+ 				off = 0;
+ 			}
+ 
+-			if (!bio || (bio_chained && !off)) {
++			if (!bio || (bio_chained && !off) ||
++			    bio->bi_iter.bi_size >= max_bio_size) {
+ 				/* start new bio */
+ 			} else {
+-				sz = bio_add_page(bio, page, bsize, off);
+-				if (sz == bsize)
+-					goto block_added;
++				sector_t sector = dblock << sdp->sd_fsb2bb_shift;
++
++				if (bio_end_sector(bio) == sector) {
++					sz = bio_add_page(bio, page, bsize, off);
++					if (sz == bsize)
++						goto block_added;
++				}
+ 				if (off) {
+ 					unsigned int blocks =
+ 						(PAGE_SIZE - off) >> bsize_shift;
+@@ -564,7 +569,7 @@ block_added:
+ 			off += bsize;
+ 			if (off == PAGE_SIZE)
+ 				page = NULL;
+-			if (blocks_submitted < blocks_read + readahead_blocks) {
++			if (blocks_submitted < 2 * max_bio_size >> bsize_shift) {
+ 				/* Keep at least one bio in flight */
+ 				continue;
+ 			}
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 9690c845a3e4..832e042531bc 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -4258,7 +4258,7 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	int ret;
+ 
+ 	/* Still need defer if there is pending req in defer list. */
+-	if (!req_need_defer(req) && list_empty(&ctx->defer_list))
++	if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))
+ 		return 0;
+ 
+ 	if (!req->io && io_alloc_async_ctx(req))
+@@ -6451,7 +6451,7 @@ static void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx)
+ 	 * it could cause shutdown to hang.
+ 	 */
+ 	while (ctx->sqo_thread && !wq_has_sleeper(&ctx->sqo_wait))
+-		cpu_relax();
++		cond_resched();
+ 
+ 	io_kill_timeouts(ctx);
+ 	io_poll_remove_all(ctx);
+diff --git a/fs/ioctl.c b/fs/ioctl.c
+index 282d45be6f45..5e80b40bc1b5 100644
+--- a/fs/ioctl.c
++++ b/fs/ioctl.c
+@@ -55,6 +55,7 @@ EXPORT_SYMBOL(vfs_ioctl);
+ static int ioctl_fibmap(struct file *filp, int __user *p)
+ {
+ 	struct inode *inode = file_inode(filp);
++	struct super_block *sb = inode->i_sb;
+ 	int error, ur_block;
+ 	sector_t block;
+ 
+@@ -71,6 +72,13 @@ static int ioctl_fibmap(struct file *filp, int __user *p)
+ 	block = ur_block;
+ 	error = bmap(inode, &block);
+ 
++	if (block > INT_MAX) {
++		error = -ERANGE;
++		pr_warn_ratelimited("[%s/%d] FS: %s File: %pD4 would truncate fibmap result\n",
++				    current->comm, task_pid_nr(current),
++				    sb->s_id, filp);
++	}
++
+ 	if (error)
+ 		ur_block = 0;
+ 	else
+diff --git a/fs/iomap/fiemap.c b/fs/iomap/fiemap.c
+index bccf305ea9ce..d55e8f491a5e 100644
+--- a/fs/iomap/fiemap.c
++++ b/fs/iomap/fiemap.c
+@@ -117,10 +117,7 @@ iomap_bmap_actor(struct inode *inode, loff_t pos, loff_t length,
+ 
+ 	if (iomap->type == IOMAP_MAPPED) {
+ 		addr = (pos - iomap->offset + iomap->addr) >> inode->i_blkbits;
+-		if (addr > INT_MAX)
+-			WARN(1, "would truncate bmap result\n");
+-		else
+-			*bno = addr;
++		*bno = addr;
+ 	}
+ 	return 0;
+ }
+diff --git a/fs/nfs/fscache.c b/fs/nfs/fscache.c
+index 1abf126c2df4..a60df88efc40 100644
+--- a/fs/nfs/fscache.c
++++ b/fs/nfs/fscache.c
+@@ -118,8 +118,6 @@ void nfs_fscache_get_super_cookie(struct super_block *sb, const char *uniq, int
+ 
+ 	nfss->fscache_key = NULL;
+ 	nfss->fscache = NULL;
+-	if (!(nfss->options & NFS_OPTION_FSCACHE))
+-		return;
+ 	if (!uniq) {
+ 		uniq = "";
+ 		ulen = 1;
+@@ -188,7 +186,8 @@ void nfs_fscache_get_super_cookie(struct super_block *sb, const char *uniq, int
+ 	/* create a cache index for looking up filehandles */
+ 	nfss->fscache = fscache_acquire_cookie(nfss->nfs_client->fscache,
+ 					       &nfs_fscache_super_index_def,
+-					       key, sizeof(*key) + ulen,
++					       &key->key,
++					       sizeof(key->key) + ulen,
+ 					       NULL, 0,
+ 					       nfss, 0, true);
+ 	dfprintk(FSCACHE, "NFS: get superblock cookie (0x%p/0x%p)\n",
+@@ -226,6 +225,19 @@ void nfs_fscache_release_super_cookie(struct super_block *sb)
+ 	}
+ }
+ 
++static void nfs_fscache_update_auxdata(struct nfs_fscache_inode_auxdata *auxdata,
++				  struct nfs_inode *nfsi)
++{
++	memset(auxdata, 0, sizeof(*auxdata));
++	auxdata->mtime_sec  = nfsi->vfs_inode.i_mtime.tv_sec;
++	auxdata->mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec;
++	auxdata->ctime_sec  = nfsi->vfs_inode.i_ctime.tv_sec;
++	auxdata->ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec;
++
++	if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4)
++		auxdata->change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode);
++}
++
+ /*
+  * Initialise the per-inode cache cookie pointer for an NFS inode.
+  */
+@@ -239,14 +251,7 @@ void nfs_fscache_init_inode(struct inode *inode)
+ 	if (!(nfss->fscache && S_ISREG(inode->i_mode)))
+ 		return;
+ 
+-	memset(&auxdata, 0, sizeof(auxdata));
+-	auxdata.mtime_sec  = nfsi->vfs_inode.i_mtime.tv_sec;
+-	auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec;
+-	auxdata.ctime_sec  = nfsi->vfs_inode.i_ctime.tv_sec;
+-	auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec;
+-
+-	if (NFS_SERVER(&nfsi->vfs_inode)->nfs_client->rpc_ops->version == 4)
+-		auxdata.change_attr = inode_peek_iversion_raw(&nfsi->vfs_inode);
++	nfs_fscache_update_auxdata(&auxdata, nfsi);
+ 
+ 	nfsi->fscache = fscache_acquire_cookie(NFS_SB(inode->i_sb)->fscache,
+ 					       &nfs_fscache_inode_object_def,
+@@ -266,11 +271,7 @@ void nfs_fscache_clear_inode(struct inode *inode)
+ 
+ 	dfprintk(FSCACHE, "NFS: clear cookie (0x%p/0x%p)\n", nfsi, cookie);
+ 
+-	memset(&auxdata, 0, sizeof(auxdata));
+-	auxdata.mtime_sec  = nfsi->vfs_inode.i_mtime.tv_sec;
+-	auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec;
+-	auxdata.ctime_sec  = nfsi->vfs_inode.i_ctime.tv_sec;
+-	auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec;
++	nfs_fscache_update_auxdata(&auxdata, nfsi);
+ 	fscache_relinquish_cookie(cookie, &auxdata, false);
+ 	nfsi->fscache = NULL;
+ }
+@@ -310,11 +311,7 @@ void nfs_fscache_open_file(struct inode *inode, struct file *filp)
+ 	if (!fscache_cookie_valid(cookie))
+ 		return;
+ 
+-	memset(&auxdata, 0, sizeof(auxdata));
+-	auxdata.mtime_sec  = nfsi->vfs_inode.i_mtime.tv_sec;
+-	auxdata.mtime_nsec = nfsi->vfs_inode.i_mtime.tv_nsec;
+-	auxdata.ctime_sec  = nfsi->vfs_inode.i_ctime.tv_sec;
+-	auxdata.ctime_nsec = nfsi->vfs_inode.i_ctime.tv_nsec;
++	nfs_fscache_update_auxdata(&auxdata, nfsi);
+ 
+ 	if (inode_is_open_for_write(inode)) {
+ 		dfprintk(FSCACHE, "NFS: nfsi 0x%p disabling cache\n", nfsi);
+diff --git a/fs/nfs/mount_clnt.c b/fs/nfs/mount_clnt.c
+index 35c8cb2d7637..dda5c3e65d8d 100644
+--- a/fs/nfs/mount_clnt.c
++++ b/fs/nfs/mount_clnt.c
+@@ -30,6 +30,7 @@
+ #define encode_dirpath_sz	(1 + XDR_QUADLEN(MNTPATHLEN))
+ #define MNT_status_sz		(1)
+ #define MNT_fhandle_sz		XDR_QUADLEN(NFS2_FHSIZE)
++#define MNT_fhandlev3_sz	XDR_QUADLEN(NFS3_FHSIZE)
+ #define MNT_authflav3_sz	(1 + NFS_MAX_SECFLAVORS)
+ 
+ /*
+@@ -37,7 +38,7 @@
+  */
+ #define MNT_enc_dirpath_sz	encode_dirpath_sz
+ #define MNT_dec_mountres_sz	(MNT_status_sz + MNT_fhandle_sz)
+-#define MNT_dec_mountres3_sz	(MNT_status_sz + MNT_fhandle_sz + \
++#define MNT_dec_mountres3_sz	(MNT_status_sz + MNT_fhandlev3_sz + \
+ 				 MNT_authflav3_sz)
+ 
+ /*
+diff --git a/fs/nfs/nfs4state.c b/fs/nfs/nfs4state.c
+index f7723d221945..459c7fb5d103 100644
+--- a/fs/nfs/nfs4state.c
++++ b/fs/nfs/nfs4state.c
+@@ -734,9 +734,9 @@ nfs4_get_open_state(struct inode *inode, struct nfs4_state_owner *owner)
+ 		state = new;
+ 		state->owner = owner;
+ 		atomic_inc(&owner->so_count);
+-		list_add_rcu(&state->inode_states, &nfsi->open_states);
+ 		ihold(inode);
+ 		state->inode = inode;
++		list_add_rcu(&state->inode_states, &nfsi->open_states);
+ 		spin_unlock(&inode->i_lock);
+ 		/* Note: The reclaim code dictates that we add stateless
+ 		 * and read-only stateids to the end of the list */
+diff --git a/fs/nfs/super.c b/fs/nfs/super.c
+index dada09b391c6..c0d5240b8a0a 100644
+--- a/fs/nfs/super.c
++++ b/fs/nfs/super.c
+@@ -1154,7 +1154,6 @@ static void nfs_get_cache_cookie(struct super_block *sb,
+ 			uniq = ctx->fscache_uniq;
+ 			ulen = strlen(ctx->fscache_uniq);
+ 		}
+-		return;
+ 	}
+ 
+ 	nfs_fscache_get_super_cookie(sb, uniq, ulen);
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index f5d30573f4a9..deb13f0a0f7d 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -171,6 +171,13 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 		if (!fsnotify_iter_should_report_type(iter_info, type))
+ 			continue;
+ 		mark = iter_info->marks[type];
++		/*
++		 * If the event is on dir and this mark doesn't care about
++		 * events on dir, don't send it!
++		 */
++		if (event_mask & FS_ISDIR && !(mark->mask & FS_ISDIR))
++			continue;
++
+ 		/*
+ 		 * If the event is for a child and this mark doesn't care about
+ 		 * events on a child, don't send it!
+@@ -203,10 +210,6 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 		user_mask &= ~FAN_ONDIR;
+ 	}
+ 
+-	if (event_mask & FS_ISDIR &&
+-	    !(marks_mask & FS_ISDIR & ~marks_ignored_mask))
+-		return 0;
+-
+ 	return test_mask & user_mask;
+ }
+ 
+diff --git a/include/linux/compiler.h b/include/linux/compiler.h
+index 034b0a644efc..448c91bf543b 100644
+--- a/include/linux/compiler.h
++++ b/include/linux/compiler.h
+@@ -356,4 +356,10 @@ static inline void *offset_to_ptr(const int *off)
+ /* &a[0] degrades to a pointer: a different type from an array */
+ #define __must_be_array(a)	BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
+ 
++/*
++ * This is needed in functions which generate the stack canary, see
++ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
++ */
++#define prevent_tail_call_optimization()	mb()
++
+ #endif /* __LINUX_COMPILER_H */
+diff --git a/include/linux/fs.h b/include/linux/fs.h
+index abedbffe2c9e..872ee2131589 100644
+--- a/include/linux/fs.h
++++ b/include/linux/fs.h
+@@ -978,7 +978,7 @@ struct file_handle {
+ 	__u32 handle_bytes;
+ 	int handle_type;
+ 	/* file identifier */
+-	unsigned char f_handle[0];
++	unsigned char f_handle[];
+ };
+ 
+ static inline struct file *get_file(struct file *f)
+diff --git a/include/linux/ftrace.h b/include/linux/ftrace.h
+index db95244a62d4..ab4bd15cbcdb 100644
+--- a/include/linux/ftrace.h
++++ b/include/linux/ftrace.h
+@@ -210,6 +210,29 @@ struct ftrace_ops {
+ #endif
+ };
+ 
++extern struct ftrace_ops __rcu *ftrace_ops_list;
++extern struct ftrace_ops ftrace_list_end;
++
++/*
++ * Traverse the ftrace_global_list, invoking all entries.  The reason that we
++ * can use rcu_dereference_raw_check() is that elements removed from this list
++ * are simply leaked, so there is no need to interact with a grace-period
++ * mechanism.  The rcu_dereference_raw_check() calls are needed to handle
++ * concurrent insertions into the ftrace_global_list.
++ *
++ * Silly Alpha and silly pointer-speculation compiler optimizations!
++ */
++#define do_for_each_ftrace_op(op, list)			\
++	op = rcu_dereference_raw_check(list);			\
++	do
++
++/*
++ * Optimized for just a single item in the list (as that is the normal case).
++ */
++#define while_for_each_ftrace_op(op)				\
++	while (likely(op = rcu_dereference_raw_check((op)->next)) &&	\
++	       unlikely((op) != &ftrace_list_end))
++
+ /*
+  * Type of the current tracing.
+  */
+diff --git a/include/linux/host1x.h b/include/linux/host1x.h
+index 62d216ff1097..c230b4e70d75 100644
+--- a/include/linux/host1x.h
++++ b/include/linux/host1x.h
+@@ -17,9 +17,12 @@ enum host1x_class {
+ 	HOST1X_CLASS_GR3D = 0x60,
+ };
+ 
++struct host1x;
+ struct host1x_client;
+ struct iommu_group;
+ 
++u64 host1x_get_dma_mask(struct host1x *host1x);
++
+ /**
+  * struct host1x_client_ops - host1x client operations
+  * @init: host1x client initialization code
+diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
+index e9ba01336d4e..bc5a3621a9d7 100644
+--- a/include/linux/memcontrol.h
++++ b/include/linux/memcontrol.h
+@@ -783,6 +783,8 @@ static inline void memcg_memory_event(struct mem_cgroup *memcg,
+ 		atomic_long_inc(&memcg->memory_events[event]);
+ 		cgroup_file_notify(&memcg->events_file);
+ 
++		if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
++			break;
+ 		if (cgrp_dfl_root.flags & CGRP_ROOT_MEMORY_LOCAL_EVENTS)
+ 			break;
+ 	} while ((memcg = parent_mem_cgroup(memcg)) &&
+diff --git a/include/linux/pnp.h b/include/linux/pnp.h
+index 3b12fd28af78..fc4df3ccefc9 100644
+--- a/include/linux/pnp.h
++++ b/include/linux/pnp.h
+@@ -220,10 +220,8 @@ struct pnp_card {
+ #define global_to_pnp_card(n) list_entry(n, struct pnp_card, global_list)
+ #define protocol_to_pnp_card(n) list_entry(n, struct pnp_card, protocol_list)
+ #define to_pnp_card(n) container_of(n, struct pnp_card, dev)
+-#define pnp_for_each_card(card) \
+-	for((card) = global_to_pnp_card(pnp_cards.next); \
+-	(card) != global_to_pnp_card(&pnp_cards); \
+-	(card) = global_to_pnp_card((card)->global_list.next))
++#define pnp_for_each_card(card)	\
++	list_for_each_entry(card, &pnp_cards, global_list)
+ 
+ struct pnp_card_link {
+ 	struct pnp_card *card;
+@@ -276,14 +274,9 @@ struct pnp_dev {
+ #define card_to_pnp_dev(n) list_entry(n, struct pnp_dev, card_list)
+ #define protocol_to_pnp_dev(n) list_entry(n, struct pnp_dev, protocol_list)
+ #define	to_pnp_dev(n) container_of(n, struct pnp_dev, dev)
+-#define pnp_for_each_dev(dev) \
+-	for((dev) = global_to_pnp_dev(pnp_global.next); \
+-	(dev) != global_to_pnp_dev(&pnp_global); \
+-	(dev) = global_to_pnp_dev((dev)->global_list.next))
+-#define card_for_each_dev(card,dev) \
+-	for((dev) = card_to_pnp_dev((card)->devices.next); \
+-	(dev) != card_to_pnp_dev(&(card)->devices); \
+-	(dev) = card_to_pnp_dev((dev)->card_list.next))
++#define pnp_for_each_dev(dev) list_for_each_entry(dev, &pnp_global, global_list)
++#define card_for_each_dev(card, dev)	\
++	list_for_each_entry(dev, &(card)->devices, card_list)
+ #define pnp_dev_name(dev) (dev)->name
+ 
+ static inline void *pnp_get_drvdata(struct pnp_dev *pdev)
+@@ -437,14 +430,10 @@ struct pnp_protocol {
+ };
+ 
+ #define to_pnp_protocol(n) list_entry(n, struct pnp_protocol, protocol_list)
+-#define protocol_for_each_card(protocol,card) \
+-	for((card) = protocol_to_pnp_card((protocol)->cards.next); \
+-	(card) != protocol_to_pnp_card(&(protocol)->cards); \
+-	(card) = protocol_to_pnp_card((card)->protocol_list.next))
+-#define protocol_for_each_dev(protocol,dev) \
+-	for((dev) = protocol_to_pnp_dev((protocol)->devices.next); \
+-	(dev) != protocol_to_pnp_dev(&(protocol)->devices); \
+-	(dev) = protocol_to_pnp_dev((dev)->protocol_list.next))
++#define protocol_for_each_card(protocol, card)	\
++	list_for_each_entry(card, &(protocol)->cards, protocol_list)
++#define protocol_for_each_dev(protocol, dev)	\
++	list_for_each_entry(dev, &(protocol)->devices, protocol_list)
+ 
+ extern struct bus_type pnp_bus_type;
+ 
+diff --git a/include/linux/skmsg.h b/include/linux/skmsg.h
+index 14d61bba0b79..71db17927a9d 100644
+--- a/include/linux/skmsg.h
++++ b/include/linux/skmsg.h
+@@ -187,6 +187,7 @@ static inline void sk_msg_xfer(struct sk_msg *dst, struct sk_msg *src,
+ 	dst->sg.data[which] = src->sg.data[which];
+ 	dst->sg.data[which].length  = size;
+ 	dst->sg.size		   += size;
++	src->sg.size		   -= size;
+ 	src->sg.data[which].length -= size;
+ 	src->sg.data[which].offset += size;
+ }
+diff --git a/include/linux/sunrpc/gss_api.h b/include/linux/sunrpc/gss_api.h
+index 48c1b1674cbf..bc07e51f20d1 100644
+--- a/include/linux/sunrpc/gss_api.h
++++ b/include/linux/sunrpc/gss_api.h
+@@ -21,6 +21,7 @@
+ struct gss_ctx {
+ 	struct gss_api_mech	*mech_type;
+ 	void			*internal_ctx_id;
++	unsigned int		slack, align;
+ };
+ 
+ #define GSS_C_NO_BUFFER		((struct xdr_netobj) 0)
+@@ -66,6 +67,7 @@ u32 gss_wrap(
+ u32 gss_unwrap(
+ 		struct gss_ctx		*ctx_id,
+ 		int			offset,
++		int			len,
+ 		struct xdr_buf		*inbuf);
+ u32 gss_delete_sec_context(
+ 		struct gss_ctx		**ctx_id);
+@@ -126,6 +128,7 @@ struct gss_api_ops {
+ 	u32 (*gss_unwrap)(
+ 			struct gss_ctx		*ctx_id,
+ 			int			offset,
++			int			len,
+ 			struct xdr_buf		*buf);
+ 	void (*gss_delete_sec_context)(
+ 			void			*internal_ctx_id);
+diff --git a/include/linux/sunrpc/gss_krb5.h b/include/linux/sunrpc/gss_krb5.h
+index c1d77dd8ed41..e8f8ffe7448b 100644
+--- a/include/linux/sunrpc/gss_krb5.h
++++ b/include/linux/sunrpc/gss_krb5.h
+@@ -83,7 +83,7 @@ struct gss_krb5_enctype {
+ 	u32 (*encrypt_v2) (struct krb5_ctx *kctx, u32 offset,
+ 			   struct xdr_buf *buf,
+ 			   struct page **pages); /* v2 encryption function */
+-	u32 (*decrypt_v2) (struct krb5_ctx *kctx, u32 offset,
++	u32 (*decrypt_v2) (struct krb5_ctx *kctx, u32 offset, u32 len,
+ 			   struct xdr_buf *buf, u32 *headskip,
+ 			   u32 *tailskip);	/* v2 decryption function */
+ };
+@@ -255,7 +255,7 @@ gss_wrap_kerberos(struct gss_ctx *ctx_id, int offset,
+ 		struct xdr_buf *outbuf, struct page **pages);
+ 
+ u32
+-gss_unwrap_kerberos(struct gss_ctx *ctx_id, int offset,
++gss_unwrap_kerberos(struct gss_ctx *ctx_id, int offset, int len,
+ 		struct xdr_buf *buf);
+ 
+ 
+@@ -312,7 +312,7 @@ gss_krb5_aes_encrypt(struct krb5_ctx *kctx, u32 offset,
+ 		     struct page **pages);
+ 
+ u32
+-gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset,
++gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, u32 len,
+ 		     struct xdr_buf *buf, u32 *plainoffset,
+ 		     u32 *plainlen);
+ 
+diff --git a/include/linux/sunrpc/xdr.h b/include/linux/sunrpc/xdr.h
+index b41f34977995..ae2b1449dc09 100644
+--- a/include/linux/sunrpc/xdr.h
++++ b/include/linux/sunrpc/xdr.h
+@@ -184,6 +184,7 @@ xdr_adjust_iovec(struct kvec *iov, __be32 *p)
+ extern void xdr_shift_buf(struct xdr_buf *, size_t);
+ extern void xdr_buf_from_iov(struct kvec *, struct xdr_buf *);
+ extern int xdr_buf_subsegment(struct xdr_buf *, struct xdr_buf *, unsigned int, unsigned int);
++extern void xdr_buf_trim(struct xdr_buf *, unsigned int);
+ extern int xdr_buf_read_mic(struct xdr_buf *, struct xdr_netobj *, unsigned int);
+ extern int read_bytes_from_xdr_buf(struct xdr_buf *, unsigned int, void *, unsigned int);
+ extern int write_bytes_to_xdr_buf(struct xdr_buf *, unsigned int, void *, unsigned int);
+diff --git a/include/linux/tty.h b/include/linux/tty.h
+index bd5fe0e907e8..a99e9b8e4e31 100644
+--- a/include/linux/tty.h
++++ b/include/linux/tty.h
+@@ -66,7 +66,7 @@ struct tty_buffer {
+ 	int read;
+ 	int flags;
+ 	/* Data points here */
+-	unsigned long data[0];
++	unsigned long data[];
+ };
+ 
+ /* Values for .flags field of tty_buffer */
+diff --git a/include/net/netfilter/nf_conntrack.h b/include/net/netfilter/nf_conntrack.h
+index 9f551f3b69c6..90690e37a56f 100644
+--- a/include/net/netfilter/nf_conntrack.h
++++ b/include/net/netfilter/nf_conntrack.h
+@@ -87,7 +87,7 @@ struct nf_conn {
+ 	struct hlist_node	nat_bysource;
+ #endif
+ 	/* all members below initialized via memset */
+-	u8 __nfct_init_offset[0];
++	struct { } __nfct_init_offset;
+ 
+ 	/* If we were expected by an expectation, this will be it */
+ 	struct nf_conn *master;
+diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
+index c30f914867e6..f1f8acb14b67 100644
+--- a/include/net/sch_generic.h
++++ b/include/net/sch_generic.h
+@@ -407,6 +407,7 @@ struct tcf_block {
+ 	struct mutex lock;
+ 	struct list_head chain_list;
+ 	u32 index; /* block index for shared blocks */
++	u32 classid; /* which class this block belongs to */
+ 	refcount_t refcnt;
+ 	struct net *net;
+ 	struct Qdisc *q;
+diff --git a/include/net/tcp.h b/include/net/tcp.h
+index 2edb73c27962..00a57766e16e 100644
+--- a/include/net/tcp.h
++++ b/include/net/tcp.h
+@@ -1421,6 +1421,19 @@ static inline int tcp_full_space(const struct sock *sk)
+ 	return tcp_win_from_space(sk, READ_ONCE(sk->sk_rcvbuf));
+ }
+ 
++/* We provision sk_rcvbuf around 200% of sk_rcvlowat.
++ * If 87.5 % (7/8) of the space has been consumed, we want to override
++ * SO_RCVLOWAT constraint, since we are receiving skbs with too small
++ * len/truesize ratio.
++ */
++static inline bool tcp_rmem_pressure(const struct sock *sk)
++{
++	int rcvbuf = READ_ONCE(sk->sk_rcvbuf);
++	int threshold = rcvbuf - (rcvbuf >> 3);
++
++	return atomic_read(&sk->sk_rmem_alloc) > threshold;
++}
++
+ extern void tcp_openreq_init_rwin(struct request_sock *req,
+ 				  const struct sock *sk_listener,
+ 				  const struct dst_entry *dst);
+diff --git a/include/soc/mscc/ocelot.h b/include/soc/mscc/ocelot.h
+index f8e1955c86f1..7b5382e10bd2 100644
+--- a/include/soc/mscc/ocelot.h
++++ b/include/soc/mscc/ocelot.h
+@@ -437,6 +437,7 @@ struct ocelot {
+ 	unsigned int			num_stats;
+ 
+ 	int				shared_queue_sz;
++	int				num_mact_rows;
+ 
+ 	struct net_device		*hw_bridge_dev;
+ 	u16				bridge_mask;
+diff --git a/include/sound/rawmidi.h b/include/sound/rawmidi.h
+index a36b7227a15a..334842daa904 100644
+--- a/include/sound/rawmidi.h
++++ b/include/sound/rawmidi.h
+@@ -61,6 +61,7 @@ struct snd_rawmidi_runtime {
+ 	size_t avail_min;	/* min avail for wakeup */
+ 	size_t avail;		/* max used buffer for wakeup */
+ 	size_t xruns;		/* over/underruns counter */
++	int buffer_ref;		/* buffer reference count */
+ 	/* misc */
+ 	spinlock_t lock;
+ 	wait_queue_head_t sleep;
+diff --git a/include/trace/events/rpcrdma.h b/include/trace/events/rpcrdma.h
+index fa14adf24235..43158151821c 100644
+--- a/include/trace/events/rpcrdma.h
++++ b/include/trace/events/rpcrdma.h
+@@ -721,11 +721,10 @@ TRACE_EVENT(xprtrdma_prepsend_failed,
+ 
+ TRACE_EVENT(xprtrdma_post_send,
+ 	TP_PROTO(
+-		const struct rpcrdma_req *req,
+-		int status
++		const struct rpcrdma_req *req
+ 	),
+ 
+-	TP_ARGS(req, status),
++	TP_ARGS(req),
+ 
+ 	TP_STRUCT__entry(
+ 		__field(const void *, req)
+@@ -734,7 +733,6 @@ TRACE_EVENT(xprtrdma_post_send,
+ 		__field(unsigned int, client_id)
+ 		__field(int, num_sge)
+ 		__field(int, signaled)
+-		__field(int, status)
+ 	),
+ 
+ 	TP_fast_assign(
+@@ -747,15 +745,13 @@ TRACE_EVENT(xprtrdma_post_send,
+ 		__entry->sc = req->rl_sendctx;
+ 		__entry->num_sge = req->rl_wr.num_sge;
+ 		__entry->signaled = req->rl_wr.send_flags & IB_SEND_SIGNALED;
+-		__entry->status = status;
+ 	),
+ 
+-	TP_printk("task:%u@%u req=%p sc=%p (%d SGE%s) %sstatus=%d",
++	TP_printk("task:%u@%u req=%p sc=%p (%d SGE%s) %s",
+ 		__entry->task_id, __entry->client_id,
+ 		__entry->req, __entry->sc, __entry->num_sge,
+ 		(__entry->num_sge == 1 ? "" : "s"),
+-		(__entry->signaled ? "signaled " : ""),
+-		__entry->status
++		(__entry->signaled ? "signaled" : "")
+ 	)
+ );
+ 
+diff --git a/init/Kconfig b/init/Kconfig
+index 4f717bfdbfe2..ef59c5c36cdb 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -36,22 +36,6 @@ config TOOLS_SUPPORT_RELR
+ config CC_HAS_ASM_INLINE
+ 	def_bool $(success,echo 'void foo(void) { asm inline (""); }' | $(CC) -x c - -c -o /dev/null)
+ 
+-config CC_HAS_WARN_MAYBE_UNINITIALIZED
+-	def_bool $(cc-option,-Wmaybe-uninitialized)
+-	help
+-	  GCC >= 4.7 supports this option.
+-
+-config CC_DISABLE_WARN_MAYBE_UNINITIALIZED
+-	bool
+-	depends on CC_HAS_WARN_MAYBE_UNINITIALIZED
+-	default CC_IS_GCC && GCC_VERSION < 40900  # unreliable for GCC < 4.9
+-	help
+-	  GCC's -Wmaybe-uninitialized is not reliable by definition.
+-	  Lots of false positive warnings are produced in some cases.
+-
+-	  If this option is enabled, -Wno-maybe-uninitialzed is passed
+-	  to the compiler to suppress maybe-uninitialized warnings.
+-
+ config CONSTRUCTORS
+ 	bool
+ 	depends on !UML
+@@ -1249,14 +1233,12 @@ config CC_OPTIMIZE_FOR_PERFORMANCE
+ config CC_OPTIMIZE_FOR_PERFORMANCE_O3
+ 	bool "Optimize more for performance (-O3)"
+ 	depends on ARC
+-	imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED  # avoid false positives
+ 	help
+ 	  Choosing this option will pass "-O3" to your compiler to optimize
+ 	  the kernel yet more for performance.
+ 
+ config CC_OPTIMIZE_FOR_SIZE
+ 	bool "Optimize for size (-Os)"
+-	imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED  # avoid false positives
+ 	help
+ 	  Choosing this option will pass "-Os" to your compiler resulting
+ 	  in a smaller kernel.
+diff --git a/init/initramfs.c b/init/initramfs.c
+index 8ec1be4d7d51..7a38012e1af7 100644
+--- a/init/initramfs.c
++++ b/init/initramfs.c
+@@ -542,7 +542,7 @@ void __weak free_initrd_mem(unsigned long start, unsigned long end)
+ }
+ 
+ #ifdef CONFIG_KEXEC_CORE
+-static bool kexec_free_initrd(void)
++static bool __init kexec_free_initrd(void)
+ {
+ 	unsigned long crashk_start = (unsigned long)__va(crashk_res.start);
+ 	unsigned long crashk_end   = (unsigned long)__va(crashk_res.end);
+diff --git a/init/main.c b/init/main.c
+index 9c7948b3763a..6bcad75d60ad 100644
+--- a/init/main.c
++++ b/init/main.c
+@@ -257,6 +257,47 @@ static int __init loglevel(char *str)
+ 
+ early_param("loglevel", loglevel);
+ 
++#ifdef CONFIG_BLK_DEV_INITRD
++static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)
++{
++	u32 size, csum;
++	char *data;
++	u32 *hdr;
++
++	if (!initrd_end)
++		return NULL;
++
++	data = (char *)initrd_end - BOOTCONFIG_MAGIC_LEN;
++	if (memcmp(data, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN))
++		return NULL;
++
++	hdr = (u32 *)(data - 8);
++	size = hdr[0];
++	csum = hdr[1];
++
++	data = ((void *)hdr) - size;
++	if ((unsigned long)data < initrd_start) {
++		pr_err("bootconfig size %d is greater than initrd size %ld\n",
++			size, initrd_end - initrd_start);
++		return NULL;
++	}
++
++	/* Remove bootconfig from initramfs/initrd */
++	initrd_end = (unsigned long)data;
++	if (_size)
++		*_size = size;
++	if (_csum)
++		*_csum = csum;
++
++	return data;
++}
++#else
++static void * __init get_boot_config_from_initrd(u32 *_size, u32 *_csum)
++{
++	return NULL;
++}
++#endif
++
+ #ifdef CONFIG_BOOT_CONFIG
+ 
+ char xbc_namebuf[XBC_KEYLEN_MAX] __initdata;
+@@ -355,9 +396,11 @@ static void __init setup_boot_config(const char *cmdline)
+ 	static char tmp_cmdline[COMMAND_LINE_SIZE] __initdata;
+ 	u32 size, csum;
+ 	char *data, *copy;
+-	u32 *hdr;
+ 	int ret;
+ 
++	/* Cut out the bootconfig data even if we have no bootconfig option */
++	data = get_boot_config_from_initrd(&size, &csum);
++
+ 	strlcpy(tmp_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+ 	parse_args("bootconfig", tmp_cmdline, NULL, 0, 0, 0, NULL,
+ 		   bootconfig_params);
+@@ -365,16 +408,10 @@ static void __init setup_boot_config(const char *cmdline)
+ 	if (!bootconfig_found)
+ 		return;
+ 
+-	if (!initrd_end)
+-		goto not_found;
+-
+-	data = (char *)initrd_end - BOOTCONFIG_MAGIC_LEN;
+-	if (memcmp(data, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN))
+-		goto not_found;
+-
+-	hdr = (u32 *)(data - 8);
+-	size = hdr[0];
+-	csum = hdr[1];
++	if (!data) {
++		pr_err("'bootconfig' found on command line, but no bootconfig found\n");
++		return;
++	}
+ 
+ 	if (size >= XBC_DATA_MAX) {
+ 		pr_err("bootconfig size %d greater than max size %d\n",
+@@ -382,10 +419,6 @@ static void __init setup_boot_config(const char *cmdline)
+ 		return;
+ 	}
+ 
+-	data = ((void *)hdr) - size;
+-	if ((unsigned long)data < initrd_start)
+-		goto not_found;
+-
+ 	if (boot_config_checksum((unsigned char *)data, size) != csum) {
+ 		pr_err("bootconfig checksum failed\n");
+ 		return;
+@@ -411,11 +444,15 @@ static void __init setup_boot_config(const char *cmdline)
+ 		extra_init_args = xbc_make_cmdline("init");
+ 	}
+ 	return;
+-not_found:
+-	pr_err("'bootconfig' found on command line, but no bootconfig found\n");
+ }
++
+ #else
+-#define setup_boot_config(cmdline)	do { } while (0)
++
++static void __init setup_boot_config(const char *cmdline)
++{
++	/* Remove bootconfig data from initrd */
++	get_boot_config_from_initrd(NULL, NULL);
++}
+ 
+ static int __init warn_bootconfig(char *str)
+ {
+@@ -995,6 +1032,8 @@ asmlinkage __visible void __init start_kernel(void)
+ 
+ 	/* Do the rest non-__init'ed, we're now alive */
+ 	arch_call_rest_init();
++
++	prevent_tail_call_optimization();
+ }
+ 
+ /* Call all constructor functions linked into the kernel. */
+diff --git a/ipc/util.c b/ipc/util.c
+index 2d70f25f64b8..c4a67982ec00 100644
+--- a/ipc/util.c
++++ b/ipc/util.c
+@@ -764,21 +764,21 @@ static struct kern_ipc_perm *sysvipc_find_ipc(struct ipc_ids *ids, loff_t pos,
+ 			total++;
+ 	}
+ 
+-	*new_pos = pos + 1;
++	ipc = NULL;
+ 	if (total >= ids->in_use)
+-		return NULL;
++		goto out;
+ 
+ 	for (; pos < ipc_mni; pos++) {
+ 		ipc = idr_find(&ids->ipcs_idr, pos);
+ 		if (ipc != NULL) {
+ 			rcu_read_lock();
+ 			ipc_lock_object(ipc);
+-			return ipc;
++			break;
+ 		}
+ 	}
+-
+-	/* Out of range - return NULL to terminate iteration */
+-	return NULL;
++out:
++	*new_pos = pos + 1;
++	return ipc;
+ }
+ 
+ static void *sysvipc_proc_next(struct seq_file *s, void *it, loff_t *pos)
+diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
+index 95d77770353c..1d6120fd5ba6 100644
+--- a/kernel/bpf/arraymap.c
++++ b/kernel/bpf/arraymap.c
+@@ -486,7 +486,12 @@ static int array_map_mmap(struct bpf_map *map, struct vm_area_struct *vma)
+ 	if (!(map->map_flags & BPF_F_MMAPABLE))
+ 		return -EINVAL;
+ 
+-	return remap_vmalloc_range(vma, array_map_vmalloc_addr(array), pgoff);
++	if (vma->vm_pgoff * PAGE_SIZE + (vma->vm_end - vma->vm_start) >
++	    PAGE_ALIGN((u64)array->map.max_entries * array->elem_size))
++		return -EINVAL;
++
++	return remap_vmalloc_range(vma, array_map_vmalloc_addr(array),
++				   vma->vm_pgoff + pgoff);
+ }
+ 
+ const struct bpf_map_ops array_map_ops = {
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index 3b92aea18ae7..e04ea4c8f935 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -1480,8 +1480,10 @@ static int map_lookup_and_delete_elem(union bpf_attr *attr)
+ 	if (err)
+ 		goto free_value;
+ 
+-	if (copy_to_user(uvalue, value, value_size) != 0)
++	if (copy_to_user(uvalue, value, value_size) != 0) {
++		err = -EFAULT;
+ 		goto free_value;
++	}
+ 
+ 	err = 0;
+ 
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index 1c53ccbd5b5d..c1bb5be530e9 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -6498,6 +6498,22 @@ static int check_return_code(struct bpf_verifier_env *env)
+ 			return 0;
+ 		range = tnum_const(0);
+ 		break;
++	case BPF_PROG_TYPE_TRACING:
++		switch (env->prog->expected_attach_type) {
++		case BPF_TRACE_FENTRY:
++		case BPF_TRACE_FEXIT:
++			range = tnum_const(0);
++			break;
++		case BPF_TRACE_RAW_TP:
++			return 0;
++		default:
++			return -ENOTSUPP;
++		}
++		break;
++	case BPF_PROG_TYPE_EXT:
++		/* freplace program can return anything as its return value
++		 * depends on the to-be-replaced kernel func or bpf program.
++		 */
+ 	default:
+ 		return 0;
+ 	}
+diff --git a/kernel/fork.c b/kernel/fork.c
+index d90af13431c7..c9ba2b7bfef9 100644
+--- a/kernel/fork.c
++++ b/kernel/fork.c
+@@ -2486,11 +2486,11 @@ long do_fork(unsigned long clone_flags,
+ 	      int __user *child_tidptr)
+ {
+ 	struct kernel_clone_args args = {
+-		.flags		= (clone_flags & ~CSIGNAL),
++		.flags		= (lower_32_bits(clone_flags) & ~CSIGNAL),
+ 		.pidfd		= parent_tidptr,
+ 		.child_tid	= child_tidptr,
+ 		.parent_tid	= parent_tidptr,
+-		.exit_signal	= (clone_flags & CSIGNAL),
++		.exit_signal	= (lower_32_bits(clone_flags) & CSIGNAL),
+ 		.stack		= stack_start,
+ 		.stack_size	= stack_size,
+ 	};
+@@ -2508,8 +2508,9 @@ long do_fork(unsigned long clone_flags,
+ pid_t kernel_thread(int (*fn)(void *), void *arg, unsigned long flags)
+ {
+ 	struct kernel_clone_args args = {
+-		.flags		= ((flags | CLONE_VM | CLONE_UNTRACED) & ~CSIGNAL),
+-		.exit_signal	= (flags & CSIGNAL),
++		.flags		= ((lower_32_bits(flags) | CLONE_VM |
++				    CLONE_UNTRACED) & ~CSIGNAL),
++		.exit_signal	= (lower_32_bits(flags) & CSIGNAL),
+ 		.stack		= (unsigned long)fn,
+ 		.stack_size	= (unsigned long)arg,
+ 	};
+@@ -2570,11 +2571,11 @@ SYSCALL_DEFINE5(clone, unsigned long, clone_flags, unsigned long, newsp,
+ #endif
+ {
+ 	struct kernel_clone_args args = {
+-		.flags		= (clone_flags & ~CSIGNAL),
++		.flags		= (lower_32_bits(clone_flags) & ~CSIGNAL),
+ 		.pidfd		= parent_tidptr,
+ 		.child_tid	= child_tidptr,
+ 		.parent_tid	= parent_tidptr,
+-		.exit_signal	= (clone_flags & CSIGNAL),
++		.exit_signal	= (lower_32_bits(clone_flags) & CSIGNAL),
+ 		.stack		= newsp,
+ 		.tls		= tls,
+ 	};
+diff --git a/kernel/trace/Kconfig b/kernel/trace/Kconfig
+index 402eef84c859..743647005f64 100644
+--- a/kernel/trace/Kconfig
++++ b/kernel/trace/Kconfig
+@@ -466,7 +466,6 @@ config PROFILE_ANNOTATED_BRANCHES
+ config PROFILE_ALL_BRANCHES
+ 	bool "Profile all if conditionals" if !FORTIFY_SOURCE
+ 	select TRACE_BRANCH_PROFILING
+-	imply CC_DISABLE_WARN_MAYBE_UNINITIALIZED  # avoid false positives
+ 	help
+ 	  This tracer profiles all branch conditions. Every if ()
+ 	  taken in the kernel is recorded whether it hit or miss.
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index 68250d433bd7..b899a2d7e900 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -325,17 +325,15 @@ static const struct bpf_func_proto *bpf_get_probe_write_proto(void)
+ 
+ /*
+  * Only limited trace_printk() conversion specifiers allowed:
+- * %d %i %u %x %ld %li %lu %lx %lld %lli %llu %llx %p %s
++ * %d %i %u %x %ld %li %lu %lx %lld %lli %llu %llx %p %pks %pus %s
+  */
+ BPF_CALL_5(bpf_trace_printk, char *, fmt, u32, fmt_size, u64, arg1,
+ 	   u64, arg2, u64, arg3)
+ {
++	int i, mod[3] = {}, fmt_cnt = 0;
++	char buf[64], fmt_ptype;
++	void *unsafe_ptr = NULL;
+ 	bool str_seen = false;
+-	int mod[3] = {};
+-	int fmt_cnt = 0;
+-	u64 unsafe_addr;
+-	char buf[64];
+-	int i;
+ 
+ 	/*
+ 	 * bpf_check()->check_func_arg()->check_stack_boundary()
+@@ -361,40 +359,71 @@ BPF_CALL_5(bpf_trace_printk, char *, fmt, u32, fmt_size, u64, arg1,
+ 		if (fmt[i] == 'l') {
+ 			mod[fmt_cnt]++;
+ 			i++;
+-		} else if (fmt[i] == 'p' || fmt[i] == 's') {
++		} else if (fmt[i] == 'p') {
+ 			mod[fmt_cnt]++;
++			if ((fmt[i + 1] == 'k' ||
++			     fmt[i + 1] == 'u') &&
++			    fmt[i + 2] == 's') {
++				fmt_ptype = fmt[i + 1];
++				i += 2;
++				goto fmt_str;
++			}
++
+ 			/* disallow any further format extensions */
+ 			if (fmt[i + 1] != 0 &&
+ 			    !isspace(fmt[i + 1]) &&
+ 			    !ispunct(fmt[i + 1]))
+ 				return -EINVAL;
+-			fmt_cnt++;
+-			if (fmt[i] == 's') {
+-				if (str_seen)
+-					/* allow only one '%s' per fmt string */
+-					return -EINVAL;
+-				str_seen = true;
+-
+-				switch (fmt_cnt) {
+-				case 1:
+-					unsafe_addr = arg1;
+-					arg1 = (long) buf;
+-					break;
+-				case 2:
+-					unsafe_addr = arg2;
+-					arg2 = (long) buf;
+-					break;
+-				case 3:
+-					unsafe_addr = arg3;
+-					arg3 = (long) buf;
+-					break;
+-				}
+-				buf[0] = 0;
+-				strncpy_from_unsafe(buf,
+-						    (void *) (long) unsafe_addr,
++
++			goto fmt_next;
++		} else if (fmt[i] == 's') {
++			mod[fmt_cnt]++;
++			fmt_ptype = fmt[i];
++fmt_str:
++			if (str_seen)
++				/* allow only one '%s' per fmt string */
++				return -EINVAL;
++			str_seen = true;
++
++			if (fmt[i + 1] != 0 &&
++			    !isspace(fmt[i + 1]) &&
++			    !ispunct(fmt[i + 1]))
++				return -EINVAL;
++
++			switch (fmt_cnt) {
++			case 0:
++				unsafe_ptr = (void *)(long)arg1;
++				arg1 = (long)buf;
++				break;
++			case 1:
++				unsafe_ptr = (void *)(long)arg2;
++				arg2 = (long)buf;
++				break;
++			case 2:
++				unsafe_ptr = (void *)(long)arg3;
++				arg3 = (long)buf;
++				break;
++			}
++
++			buf[0] = 0;
++			switch (fmt_ptype) {
++			case 's':
++#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
++				strncpy_from_unsafe(buf, unsafe_ptr,
+ 						    sizeof(buf));
++				break;
++#endif
++			case 'k':
++				strncpy_from_unsafe_strict(buf, unsafe_ptr,
++							   sizeof(buf));
++				break;
++			case 'u':
++				strncpy_from_unsafe_user(buf,
++					(__force void __user *)unsafe_ptr,
++							 sizeof(buf));
++				break;
+ 			}
+-			continue;
++			goto fmt_next;
+ 		}
+ 
+ 		if (fmt[i] == 'l') {
+@@ -405,6 +434,7 @@ BPF_CALL_5(bpf_trace_printk, char *, fmt, u32, fmt_size, u64, arg1,
+ 		if (fmt[i] != 'i' && fmt[i] != 'd' &&
+ 		    fmt[i] != 'u' && fmt[i] != 'x')
+ 			return -EINVAL;
++fmt_next:
+ 		fmt_cnt++;
+ 	}
+ 
+diff --git a/kernel/trace/ftrace_internal.h b/kernel/trace/ftrace_internal.h
+index 0456e0a3dab1..382775edf690 100644
+--- a/kernel/trace/ftrace_internal.h
++++ b/kernel/trace/ftrace_internal.h
+@@ -4,28 +4,6 @@
+ 
+ #ifdef CONFIG_FUNCTION_TRACER
+ 
+-/*
+- * Traverse the ftrace_global_list, invoking all entries.  The reason that we
+- * can use rcu_dereference_raw_check() is that elements removed from this list
+- * are simply leaked, so there is no need to interact with a grace-period
+- * mechanism.  The rcu_dereference_raw_check() calls are needed to handle
+- * concurrent insertions into the ftrace_global_list.
+- *
+- * Silly Alpha and silly pointer-speculation compiler optimizations!
+- */
+-#define do_for_each_ftrace_op(op, list)			\
+-	op = rcu_dereference_raw_check(list);			\
+-	do
+-
+-/*
+- * Optimized for just a single item in the list (as that is the normal case).
+- */
+-#define while_for_each_ftrace_op(op)				\
+-	while (likely(op = rcu_dereference_raw_check((op)->next)) &&	\
+-	       unlikely((op) != &ftrace_list_end))
+-
+-extern struct ftrace_ops __rcu *ftrace_ops_list;
+-extern struct ftrace_ops ftrace_list_end;
+ extern struct mutex ftrace_lock;
+ extern struct ftrace_ops global_ops;
+ 
+diff --git a/kernel/trace/preemptirq_delay_test.c b/kernel/trace/preemptirq_delay_test.c
+index c4c86de63cf9..312d1a0ca3b6 100644
+--- a/kernel/trace/preemptirq_delay_test.c
++++ b/kernel/trace/preemptirq_delay_test.c
+@@ -16,6 +16,7 @@
+ #include <linux/printk.h>
+ #include <linux/string.h>
+ #include <linux/sysfs.h>
++#include <linux/completion.h>
+ 
+ static ulong delay = 100;
+ static char test_mode[12] = "irq";
+@@ -28,6 +29,8 @@ MODULE_PARM_DESC(delay, "Period in microseconds (100 us default)");
+ MODULE_PARM_DESC(test_mode, "Mode of the test such as preempt, irq, or alternate (default irq)");
+ MODULE_PARM_DESC(burst_size, "The size of a burst (default 1)");
+ 
++static struct completion done;
++
+ #define MIN(x, y) ((x) < (y) ? (x) : (y))
+ 
+ static void busy_wait(ulong time)
+@@ -114,6 +117,8 @@ static int preemptirq_delay_run(void *data)
+ 	for (i = 0; i < s; i++)
+ 		(testfuncs[i])(i);
+ 
++	complete(&done);
++
+ 	set_current_state(TASK_INTERRUPTIBLE);
+ 	while (!kthread_should_stop()) {
+ 		schedule();
+@@ -128,15 +133,18 @@ static int preemptirq_delay_run(void *data)
+ static int preemptirq_run_test(void)
+ {
+ 	struct task_struct *task;
+-
+ 	char task_name[50];
+ 
++	init_completion(&done);
++
+ 	snprintf(task_name, sizeof(task_name), "%s_test", test_mode);
+ 	task =  kthread_run(preemptirq_delay_run, NULL, task_name);
+ 	if (IS_ERR(task))
+ 		return PTR_ERR(task);
+-	if (task)
++	if (task) {
++		wait_for_completion(&done);
+ 		kthread_stop(task);
++	}
+ 	return 0;
+ }
+ 
+diff --git a/kernel/umh.c b/kernel/umh.c
+index 11bf5eea474c..3474d6aa55d8 100644
+--- a/kernel/umh.c
++++ b/kernel/umh.c
+@@ -475,6 +475,12 @@ static void umh_clean_and_save_pid(struct subprocess_info *info)
+ {
+ 	struct umh_info *umh_info = info->data;
+ 
++	/* cleanup if umh_pipe_setup() was successful but exec failed */
++	if (info->pid && info->retval) {
++		fput(umh_info->pipe_to_umh);
++		fput(umh_info->pipe_from_umh);
++	}
++
+ 	argv_free(info->argv);
+ 	umh_info->pid = info->pid;
+ }
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 7c488a1ce318..532b6606a18a 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -2168,6 +2168,10 @@ char *fwnode_string(char *buf, char *end, struct fwnode_handle *fwnode,
+  *		f full name
+  *		P node name, including a possible unit address
+  * - 'x' For printing the address. Equivalent to "%lx".
++ * - '[ku]s' For a BPF/tracing related format specifier, e.g. used out of
++ *           bpf_trace_printk() where [ku] prefix specifies either kernel (k)
++ *           or user (u) memory to probe, and:
++ *              s a string, equivalent to "%s" on direct vsnprintf() use
+  *
+  * ** When making changes please also update:
+  *	Documentation/core-api/printk-formats.rst
+@@ -2251,6 +2255,14 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
+ 		if (!IS_ERR(ptr))
+ 			break;
+ 		return err_ptr(buf, end, ptr, spec);
++	case 'u':
++	case 'k':
++		switch (fmt[1]) {
++		case 's':
++			return string(buf, end, ptr, spec);
++		default:
++			return error_string(buf, end, "(einval)", spec);
++		}
+ 	}
+ 
+ 	/* default is to _not_ leak addresses, hash before printing */
+diff --git a/mm/shmem.c b/mm/shmem.c
+index 7406f91f8a52..153d889e32d1 100644
+--- a/mm/shmem.c
++++ b/mm/shmem.c
+@@ -2184,7 +2184,11 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
+ 	struct shmem_inode_info *info = SHMEM_I(inode);
+ 	int retval = -ENOMEM;
+ 
+-	spin_lock_irq(&info->lock);
++	/*
++	 * What serializes the accesses to info->flags?
++	 * ipc_lock_object() when called from shmctl_do_lock(),
++	 * no serialization needed when called from shm_destroy().
++	 */
+ 	if (lock && !(info->flags & VM_LOCKED)) {
+ 		if (!user_shm_lock(inode->i_size, user))
+ 			goto out_nomem;
+@@ -2199,7 +2203,6 @@ int shmem_lock(struct file *file, int lock, struct user_struct *user)
+ 	retval = 0;
+ 
+ out_nomem:
+-	spin_unlock_irq(&info->lock);
+ 	return retval;
+ }
+ 
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 77c154107b0d..c7047b40f569 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -8890,11 +8890,13 @@ static void netdev_sync_lower_features(struct net_device *upper,
+ 			netdev_dbg(upper, "Disabling feature %pNF on lower dev %s.\n",
+ 				   &feature, lower->name);
+ 			lower->wanted_features &= ~feature;
+-			netdev_update_features(lower);
++			__netdev_update_features(lower);
+ 
+ 			if (unlikely(lower->features & feature))
+ 				netdev_WARN(upper, "failed to disable %pNF on %s!\n",
+ 					    &feature, lower->name);
++			else
++				netdev_features_change(lower);
+ 		}
+ 	}
+ }
+diff --git a/net/core/drop_monitor.c b/net/core/drop_monitor.c
+index 31700e0c3928..04d8e8779384 100644
+--- a/net/core/drop_monitor.c
++++ b/net/core/drop_monitor.c
+@@ -212,6 +212,7 @@ static void sched_send_work(struct timer_list *t)
+ static void trace_drop_common(struct sk_buff *skb, void *location)
+ {
+ 	struct net_dm_alert_msg *msg;
++	struct net_dm_drop_point *point;
+ 	struct nlmsghdr *nlh;
+ 	struct nlattr *nla;
+ 	int i;
+@@ -230,11 +231,13 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
+ 	nlh = (struct nlmsghdr *)dskb->data;
+ 	nla = genlmsg_data(nlmsg_data(nlh));
+ 	msg = nla_data(nla);
++	point = msg->points;
+ 	for (i = 0; i < msg->entries; i++) {
+-		if (!memcmp(&location, msg->points[i].pc, sizeof(void *))) {
+-			msg->points[i].count++;
++		if (!memcmp(&location, &point->pc, sizeof(void *))) {
++			point->count++;
+ 			goto out;
+ 		}
++		point++;
+ 	}
+ 	if (msg->entries == dm_hit_limit)
+ 		goto out;
+@@ -243,8 +246,8 @@ static void trace_drop_common(struct sk_buff *skb, void *location)
+ 	 */
+ 	__nla_reserve_nohdr(dskb, sizeof(struct net_dm_drop_point));
+ 	nla->nla_len += NLA_ALIGN(sizeof(struct net_dm_drop_point));
+-	memcpy(msg->points[msg->entries].pc, &location, sizeof(void *));
+-	msg->points[msg->entries].count = 1;
++	memcpy(point->pc, &location, sizeof(void *));
++	point->count = 1;
+ 	msg->entries++;
+ 
+ 	if (!timer_pending(&data->send_timer)) {
+diff --git a/net/core/filter.c b/net/core/filter.c
+index c180871e606d..083fbe92662e 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2590,8 +2590,8 @@ BPF_CALL_4(bpf_msg_pop_data, struct sk_msg *, msg, u32, start,
+ 			}
+ 			pop = 0;
+ 		} else if (pop >= sge->length - a) {
+-			sge->length = a;
+ 			pop -= (sge->length - a);
++			sge->length = a;
+ 		}
+ 	}
+ 
+diff --git a/net/core/netprio_cgroup.c b/net/core/netprio_cgroup.c
+index 8881dd943dd0..9bd4cab7d510 100644
+--- a/net/core/netprio_cgroup.c
++++ b/net/core/netprio_cgroup.c
+@@ -236,6 +236,8 @@ static void net_prio_attach(struct cgroup_taskset *tset)
+ 	struct task_struct *p;
+ 	struct cgroup_subsys_state *css;
+ 
++	cgroup_sk_alloc_disable();
++
+ 	cgroup_taskset_for_each(p, css, tset) {
+ 		void *v = (void *)(unsigned long)css->id;
+ 
+diff --git a/net/ipv4/cipso_ipv4.c b/net/ipv4/cipso_ipv4.c
+index 0bd10a1f477f..a23094b050f8 100644
+--- a/net/ipv4/cipso_ipv4.c
++++ b/net/ipv4/cipso_ipv4.c
+@@ -1258,7 +1258,8 @@ static int cipso_v4_parsetag_rbm(const struct cipso_v4_doi *doi_def,
+ 			return ret_val;
+ 		}
+ 
+-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
++		if (secattr->attr.mls.cat)
++			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+ 	}
+ 
+ 	return 0;
+@@ -1439,7 +1440,8 @@ static int cipso_v4_parsetag_rng(const struct cipso_v4_doi *doi_def,
+ 			return ret_val;
+ 		}
+ 
+-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
++		if (secattr->attr.mls.cat)
++			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+ 	}
+ 
+ 	return 0;
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index ebe7060d0fc9..ef6b70774fe1 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -915,7 +915,7 @@ void ip_rt_send_redirect(struct sk_buff *skb)
+ 	/* Check for load limit; set rate_last to the latest sent
+ 	 * redirect.
+ 	 */
+-	if (peer->rate_tokens == 0 ||
++	if (peer->n_redirects == 0 ||
+ 	    time_after(jiffies,
+ 		       (peer->rate_last +
+ 			(ip_rt_redirect_load << peer->n_redirects)))) {
+diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
+index dc77c303e6f7..06aad5e09459 100644
+--- a/net/ipv4/tcp.c
++++ b/net/ipv4/tcp.c
+@@ -476,9 +476,17 @@ static void tcp_tx_timestamp(struct sock *sk, u16 tsflags)
+ static inline bool tcp_stream_is_readable(const struct tcp_sock *tp,
+ 					  int target, struct sock *sk)
+ {
+-	return (READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq) >= target) ||
+-		(sk->sk_prot->stream_memory_read ?
+-		sk->sk_prot->stream_memory_read(sk) : false);
++	int avail = READ_ONCE(tp->rcv_nxt) - READ_ONCE(tp->copied_seq);
++
++	if (avail > 0) {
++		if (avail >= target)
++			return true;
++		if (tcp_rmem_pressure(sk))
++			return true;
++	}
++	if (sk->sk_prot->stream_memory_read)
++		return sk->sk_prot->stream_memory_read(sk);
++	return false;
+ }
+ 
+ /*
+@@ -1756,10 +1764,11 @@ static int tcp_zerocopy_receive(struct sock *sk,
+ 
+ 	down_read(&current->mm->mmap_sem);
+ 
+-	ret = -EINVAL;
+ 	vma = find_vma(current->mm, address);
+-	if (!vma || vma->vm_start > address || vma->vm_ops != &tcp_vm_ops)
+-		goto out;
++	if (!vma || vma->vm_start > address || vma->vm_ops != &tcp_vm_ops) {
++		up_read(&current->mm->mmap_sem);
++		return -EINVAL;
++	}
+ 	zc->length = min_t(unsigned long, zc->length, vma->vm_end - address);
+ 
+ 	tp = tcp_sk(sk);
+@@ -2154,13 +2163,15 @@ skip_copy:
+ 			tp->urg_data = 0;
+ 			tcp_fast_path_check(sk);
+ 		}
+-		if (used + offset < skb->len)
+-			continue;
+ 
+ 		if (TCP_SKB_CB(skb)->has_rxtstamp) {
+ 			tcp_update_recv_tstamps(skb, &tss);
+ 			cmsg_flags |= 2;
+ 		}
++
++		if (used + offset < skb->len)
++			continue;
++
+ 		if (TCP_SKB_CB(skb)->tcp_flags & TCPHDR_FIN)
+ 			goto found_fin_ok;
+ 		if (!(flags & MSG_PEEK))
+diff --git a/net/ipv4/tcp_bpf.c b/net/ipv4/tcp_bpf.c
+index 8a01428f80c1..69b025408390 100644
+--- a/net/ipv4/tcp_bpf.c
++++ b/net/ipv4/tcp_bpf.c
+@@ -121,14 +121,17 @@ int tcp_bpf_recvmsg(struct sock *sk, struct msghdr *msg, size_t len,
+ 	struct sk_psock *psock;
+ 	int copied, ret;
+ 
++	if (unlikely(flags & MSG_ERRQUEUE))
++		return inet_recv_error(sk, msg, len, addr_len);
++
+ 	psock = sk_psock_get(sk);
+ 	if (unlikely(!psock))
+ 		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+-	if (unlikely(flags & MSG_ERRQUEUE))
+-		return inet_recv_error(sk, msg, len, addr_len);
+ 	if (!skb_queue_empty(&sk->sk_receive_queue) &&
+-	    sk_psock_queue_empty(psock))
++	    sk_psock_queue_empty(psock)) {
++		sk_psock_put(sk, psock);
+ 		return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
++	}
+ 	lock_sock(sk);
+ msg_bytes_ready:
+ 	copied = __tcp_bpf_recvmsg(sk, psock, msg, len, flags);
+@@ -200,7 +203,6 @@ static int bpf_tcp_ingress(struct sock *sk, struct sk_psock *psock,
+ 
+ 	if (!ret) {
+ 		msg->sg.start = i;
+-		msg->sg.size -= apply_bytes;
+ 		sk_psock_queue_msg(psock, tmp);
+ 		sk_psock_data_ready(sk, psock);
+ 	} else {
+diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
+index 6b6b57000dad..e17d396102ce 100644
+--- a/net/ipv4/tcp_input.c
++++ b/net/ipv4/tcp_input.c
+@@ -4761,7 +4761,8 @@ void tcp_data_ready(struct sock *sk)
+ 	const struct tcp_sock *tp = tcp_sk(sk);
+ 	int avail = tp->rcv_nxt - tp->copied_seq;
+ 
+-	if (avail < sk->sk_rcvlowat && !sock_flag(sk, SOCK_DONE))
++	if (avail < sk->sk_rcvlowat && !tcp_rmem_pressure(sk) &&
++	    !sock_flag(sk, SOCK_DONE))
+ 		return;
+ 
+ 	sk->sk_data_ready(sk);
+diff --git a/net/ipv6/calipso.c b/net/ipv6/calipso.c
+index 221c81f85cbf..8d3f66c310db 100644
+--- a/net/ipv6/calipso.c
++++ b/net/ipv6/calipso.c
+@@ -1047,7 +1047,8 @@ static int calipso_opt_getattr(const unsigned char *calipso,
+ 			goto getattr_return;
+ 		}
+ 
+-		secattr->flags |= NETLBL_SECATTR_MLS_CAT;
++		if (secattr->attr.mls.cat)
++			secattr->flags |= NETLBL_SECATTR_MLS_CAT;
+ 	}
+ 
+ 	secattr->type = NETLBL_NLTYPE_CALIPSO;
+diff --git a/net/ipv6/route.c b/net/ipv6/route.c
+index 42d0596dd398..21ee5bcaeb91 100644
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2725,8 +2725,10 @@ static void __ip6_rt_update_pmtu(struct dst_entry *dst, const struct sock *sk,
+ 	const struct in6_addr *daddr, *saddr;
+ 	struct rt6_info *rt6 = (struct rt6_info *)dst;
+ 
+-	if (dst_metric_locked(dst, RTAX_MTU))
+-		return;
++	/* Note: do *NOT* check dst_metric_locked(dst, RTAX_MTU)
++	 * IPv6 pmtu discovery isn't optional, so 'mtu lock' cannot disable it.
++	 * [see also comment in rt6_mtu_change_route()]
++	 */
+ 
+ 	if (iph) {
+ 		daddr = &iph->daddr;
+diff --git a/net/mptcp/subflow.c b/net/mptcp/subflow.c
+index 65122edf60aa..b89bd70f890a 100644
+--- a/net/mptcp/subflow.c
++++ b/net/mptcp/subflow.c
+@@ -633,6 +633,16 @@ int mptcp_subflow_create_socket(struct sock *sk, struct socket **new_sock)
+ 	if (err)
+ 		return err;
+ 
++	/* the newly created socket really belongs to the owning MPTCP master
++	 * socket, even if for additional subflows the allocation is performed
++	 * by a kernel workqueue. Adjust inode references, so that the
++	 * procfs/diag interaces really show this one belonging to the correct
++	 * user.
++	 */
++	SOCK_INODE(sf)->i_ino = SOCK_INODE(sk->sk_socket)->i_ino;
++	SOCK_INODE(sf)->i_uid = SOCK_INODE(sk->sk_socket)->i_uid;
++	SOCK_INODE(sf)->i_gid = SOCK_INODE(sk->sk_socket)->i_gid;
++
+ 	subflow = mptcp_subflow_ctx(sf->sk);
+ 	pr_debug("subflow=%p", subflow);
+ 
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index 1927fc296f95..d11a58348133 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -1517,9 +1517,9 @@ __nf_conntrack_alloc(struct net *net,
+ 	ct->status = 0;
+ 	ct->timeout = 0;
+ 	write_pnet(&ct->ct_net, net);
+-	memset(&ct->__nfct_init_offset[0], 0,
++	memset(&ct->__nfct_init_offset, 0,
+ 	       offsetof(struct nf_conn, proto) -
+-	       offsetof(struct nf_conn, __nfct_init_offset[0]));
++	       offsetof(struct nf_conn, __nfct_init_offset));
+ 
+ 	nf_ct_zone_add(ct, zone);
+ 
+@@ -2137,8 +2137,19 @@ get_next_corpse(int (*iter)(struct nf_conn *i, void *data),
+ 		nf_conntrack_lock(lockp);
+ 		if (*bucket < nf_conntrack_htable_size) {
+ 			hlist_nulls_for_each_entry(h, n, &nf_conntrack_hash[*bucket], hnnode) {
+-				if (NF_CT_DIRECTION(h) != IP_CT_DIR_ORIGINAL)
++				if (NF_CT_DIRECTION(h) != IP_CT_DIR_REPLY)
+ 					continue;
++				/* All nf_conn objects are added to hash table twice, one
++				 * for original direction tuple, once for the reply tuple.
++				 *
++				 * Exception: In the IPS_NAT_CLASH case, only the reply
++				 * tuple is added (the original tuple already existed for
++				 * a different object).
++				 *
++				 * We only need to call the iterator once for each
++				 * conntrack, so we just use the 'reply' direction
++				 * tuple while iterating.
++				 */
+ 				ct = nf_ct_tuplehash_to_ctrack(h);
+ 				if (iter(ct, data))
+ 					goto found;
+diff --git a/net/netfilter/nf_flow_table_core.c b/net/netfilter/nf_flow_table_core.c
+index 70ebebaf5bc1..0ee78a166378 100644
+--- a/net/netfilter/nf_flow_table_core.c
++++ b/net/netfilter/nf_flow_table_core.c
+@@ -271,7 +271,7 @@ static void flow_offload_del(struct nf_flowtable *flow_table,
+ 
+ 	if (nf_flow_has_expired(flow))
+ 		flow_offload_fixup_ct(flow->ct);
+-	else if (test_bit(NF_FLOW_TEARDOWN, &flow->flags))
++	else
+ 		flow_offload_fixup_ct_timeout(flow->ct);
+ 
+ 	flow_offload_free(flow);
+@@ -348,8 +348,10 @@ static void nf_flow_offload_gc_step(struct flow_offload *flow, void *data)
+ {
+ 	struct nf_flowtable *flow_table = data;
+ 
+-	if (nf_flow_has_expired(flow) || nf_ct_is_dying(flow->ct) ||
+-	    test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
++	if (nf_flow_has_expired(flow) || nf_ct_is_dying(flow->ct))
++		set_bit(NF_FLOW_TEARDOWN, &flow->flags);
++
++	if (test_bit(NF_FLOW_TEARDOWN, &flow->flags)) {
+ 		if (test_bit(NF_FLOW_HW, &flow->flags)) {
+ 			if (!test_bit(NF_FLOW_HW_DYING, &flow->flags))
+ 				nf_flow_offload_del(flow_table, flow);
+diff --git a/net/netfilter/nft_set_rbtree.c b/net/netfilter/nft_set_rbtree.c
+index 46d976969ca3..accbb54c2b71 100644
+--- a/net/netfilter/nft_set_rbtree.c
++++ b/net/netfilter/nft_set_rbtree.c
+@@ -79,6 +79,10 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 				parent = rcu_dereference_raw(parent->rb_left);
+ 				continue;
+ 			}
++
++			if (nft_set_elem_expired(&rbe->ext))
++				return false;
++
+ 			if (nft_rbtree_interval_end(rbe)) {
+ 				if (nft_set_is_anonymous(set))
+ 					return false;
+@@ -94,6 +98,7 @@ static bool __nft_rbtree_lookup(const struct net *net, const struct nft_set *set
+ 
+ 	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
+ 	    nft_set_elem_active(&interval->ext, genmask) &&
++	    !nft_set_elem_expired(&interval->ext) &&
+ 	    nft_rbtree_interval_start(interval)) {
+ 		*ext = &interval->ext;
+ 		return true;
+@@ -154,6 +159,9 @@ static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
+ 				continue;
+ 			}
+ 
++			if (nft_set_elem_expired(&rbe->ext))
++				return false;
++
+ 			if (!nft_set_ext_exists(&rbe->ext, NFT_SET_EXT_FLAGS) ||
+ 			    (*nft_set_ext_flags(&rbe->ext) & NFT_SET_ELEM_INTERVAL_END) ==
+ 			    (flags & NFT_SET_ELEM_INTERVAL_END)) {
+@@ -170,6 +178,7 @@ static bool __nft_rbtree_get(const struct net *net, const struct nft_set *set,
+ 
+ 	if (set->flags & NFT_SET_INTERVAL && interval != NULL &&
+ 	    nft_set_elem_active(&interval->ext, genmask) &&
++	    !nft_set_elem_expired(&interval->ext) &&
+ 	    ((!nft_rbtree_interval_end(interval) &&
+ 	      !(flags & NFT_SET_ELEM_INTERVAL_END)) ||
+ 	     (nft_rbtree_interval_end(interval) &&
+@@ -418,6 +427,8 @@ static void nft_rbtree_walk(const struct nft_ctx *ctx,
+ 
+ 		if (iter->count < iter->skip)
+ 			goto cont;
++		if (nft_set_elem_expired(&rbe->ext))
++			goto cont;
+ 		if (!nft_set_elem_active(&rbe->ext, iter->genmask))
+ 			goto cont;
+ 
+diff --git a/net/netlabel/netlabel_kapi.c b/net/netlabel/netlabel_kapi.c
+index 409a3ae47ce2..5e1239cef000 100644
+--- a/net/netlabel/netlabel_kapi.c
++++ b/net/netlabel/netlabel_kapi.c
+@@ -734,6 +734,12 @@ int netlbl_catmap_getlong(struct netlbl_lsm_catmap *catmap,
+ 	if ((off & (BITS_PER_LONG - 1)) != 0)
+ 		return -EINVAL;
+ 
++	/* a null catmap is equivalent to an empty one */
++	if (!catmap) {
++		*offset = (u32)-1;
++		return 0;
++	}
++
+ 	if (off < catmap->startbit) {
+ 		off = catmap->startbit;
+ 		*offset = off;
+diff --git a/net/rds/message.c b/net/rds/message.c
+index 50f13f1d4ae0..2d43e13d6dd5 100644
+--- a/net/rds/message.c
++++ b/net/rds/message.c
+@@ -308,26 +308,20 @@ out:
+ /*
+  * RDS ops use this to grab SG entries from the rm's sg pool.
+  */
+-struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
+-					  int *ret)
++struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents)
+ {
+ 	struct scatterlist *sg_first = (struct scatterlist *) &rm[1];
+ 	struct scatterlist *sg_ret;
+ 
+-	if (WARN_ON(!ret))
+-		return NULL;
+-
+ 	if (nents <= 0) {
+ 		pr_warn("rds: alloc sgs failed! nents <= 0\n");
+-		*ret = -EINVAL;
+-		return NULL;
++		return ERR_PTR(-EINVAL);
+ 	}
+ 
+ 	if (rm->m_used_sgs + nents > rm->m_total_sgs) {
+ 		pr_warn("rds: alloc sgs failed! total %d used %d nents %d\n",
+ 			rm->m_total_sgs, rm->m_used_sgs, nents);
+-		*ret = -ENOMEM;
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 	}
+ 
+ 	sg_ret = &sg_first[rm->m_used_sgs];
+@@ -343,7 +337,6 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
+ 	unsigned int i;
+ 	int num_sgs = DIV_ROUND_UP(total_len, PAGE_SIZE);
+ 	int extra_bytes = num_sgs * sizeof(struct scatterlist);
+-	int ret;
+ 
+ 	rm = rds_message_alloc(extra_bytes, GFP_NOWAIT);
+ 	if (!rm)
+@@ -352,10 +345,10 @@ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned in
+ 	set_bit(RDS_MSG_PAGEVEC, &rm->m_flags);
+ 	rm->m_inc.i_hdr.h_len = cpu_to_be32(total_len);
+ 	rm->data.op_nents = DIV_ROUND_UP(total_len, PAGE_SIZE);
+-	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
+-	if (!rm->data.op_sg) {
++	rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
++	if (IS_ERR(rm->data.op_sg)) {
+ 		rds_message_put(rm);
+-		return ERR_PTR(ret);
++		return ERR_CAST(rm->data.op_sg);
+ 	}
+ 
+ 	for (i = 0; i < rm->data.op_nents; ++i) {
+diff --git a/net/rds/rdma.c b/net/rds/rdma.c
+index 585e6b3b69ce..554ea7f0277f 100644
+--- a/net/rds/rdma.c
++++ b/net/rds/rdma.c
+@@ -664,9 +664,11 @@ int rds_cmsg_rdma_args(struct rds_sock *rs, struct rds_message *rm,
+ 	op->op_odp_mr = NULL;
+ 
+ 	WARN_ON(!nr_pages);
+-	op->op_sg = rds_message_alloc_sgs(rm, nr_pages, &ret);
+-	if (!op->op_sg)
++	op->op_sg = rds_message_alloc_sgs(rm, nr_pages);
++	if (IS_ERR(op->op_sg)) {
++		ret = PTR_ERR(op->op_sg);
+ 		goto out_pages;
++	}
+ 
+ 	if (op->op_notify || op->op_recverr) {
+ 		/* We allocate an uninitialized notifier here, because
+@@ -905,9 +907,11 @@ int rds_cmsg_atomic(struct rds_sock *rs, struct rds_message *rm,
+ 	rm->atomic.op_silent = !!(args->flags & RDS_RDMA_SILENT);
+ 	rm->atomic.op_active = 1;
+ 	rm->atomic.op_recverr = rs->rs_recverr;
+-	rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1, &ret);
+-	if (!rm->atomic.op_sg)
++	rm->atomic.op_sg = rds_message_alloc_sgs(rm, 1);
++	if (IS_ERR(rm->atomic.op_sg)) {
++		ret = PTR_ERR(rm->atomic.op_sg);
+ 		goto err;
++	}
+ 
+ 	/* verify 8 byte-aligned */
+ 	if (args->local_addr & 0x7) {
+diff --git a/net/rds/rds.h b/net/rds/rds.h
+index e4a603523083..b8b7ad766046 100644
+--- a/net/rds/rds.h
++++ b/net/rds/rds.h
+@@ -852,8 +852,7 @@ rds_conn_connecting(struct rds_connection *conn)
+ 
+ /* message.c */
+ struct rds_message *rds_message_alloc(unsigned int nents, gfp_t gfp);
+-struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents,
+-					  int *ret);
++struct scatterlist *rds_message_alloc_sgs(struct rds_message *rm, int nents);
+ int rds_message_copy_from_user(struct rds_message *rm, struct iov_iter *from,
+ 			       bool zcopy);
+ struct rds_message *rds_message_map_pages(unsigned long *page_addrs, unsigned int total_len);
+diff --git a/net/rds/send.c b/net/rds/send.c
+index 82dcd8b84fe7..68e2bdb08fd0 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -1274,9 +1274,11 @@ int rds_sendmsg(struct socket *sock, struct msghdr *msg, size_t payload_len)
+ 
+ 	/* Attach data to the rm */
+ 	if (payload_len) {
+-		rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs, &ret);
+-		if (!rm->data.op_sg)
++		rm->data.op_sg = rds_message_alloc_sgs(rm, num_sgs);
++		if (IS_ERR(rm->data.op_sg)) {
++			ret = PTR_ERR(rm->data.op_sg);
+ 			goto out;
++		}
+ 		ret = rds_message_copy_from_user(rm, &msg->msg_iter, zcopy);
+ 		if (ret)
+ 			goto out;
+diff --git a/net/sched/cls_api.c b/net/sched/cls_api.c
+index c2cdd0fc2e70..68c8fc6f535c 100644
+--- a/net/sched/cls_api.c
++++ b/net/sched/cls_api.c
+@@ -2005,6 +2005,7 @@ replay:
+ 		err = PTR_ERR(block);
+ 		goto errout;
+ 	}
++	block->classid = parent;
+ 
+ 	chain_index = tca[TCA_CHAIN] ? nla_get_u32(tca[TCA_CHAIN]) : 0;
+ 	if (chain_index > TC_ACT_EXT_VAL_MASK) {
+@@ -2547,12 +2548,10 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 			return skb->len;
+ 
+ 		parent = tcm->tcm_parent;
+-		if (!parent) {
++		if (!parent)
+ 			q = dev->qdisc;
+-			parent = q->handle;
+-		} else {
++		else
+ 			q = qdisc_lookup(dev, TC_H_MAJ(tcm->tcm_parent));
+-		}
+ 		if (!q)
+ 			goto out;
+ 		cops = q->ops->cl_ops;
+@@ -2568,6 +2567,7 @@ static int tc_dump_tfilter(struct sk_buff *skb, struct netlink_callback *cb)
+ 		block = cops->tcf_block(q, cl, NULL);
+ 		if (!block)
+ 			goto out;
++		parent = block->classid;
+ 		if (tcf_block_shared(block))
+ 			q = NULL;
+ 	}
+diff --git a/net/sunrpc/auth_gss/auth_gss.c b/net/sunrpc/auth_gss/auth_gss.c
+index 2dc740acb3bf..a7ad150fd4ee 100644
+--- a/net/sunrpc/auth_gss/auth_gss.c
++++ b/net/sunrpc/auth_gss/auth_gss.c
+@@ -2030,7 +2030,6 @@ gss_unwrap_resp_priv(struct rpc_task *task, struct rpc_cred *cred,
+ 	struct xdr_buf *rcv_buf = &rqstp->rq_rcv_buf;
+ 	struct kvec *head = rqstp->rq_rcv_buf.head;
+ 	struct rpc_auth *auth = cred->cr_auth;
+-	unsigned int savedlen = rcv_buf->len;
+ 	u32 offset, opaque_len, maj_stat;
+ 	__be32 *p;
+ 
+@@ -2041,9 +2040,9 @@ gss_unwrap_resp_priv(struct rpc_task *task, struct rpc_cred *cred,
+ 	offset = (u8 *)(p) - (u8 *)head->iov_base;
+ 	if (offset + opaque_len > rcv_buf->len)
+ 		goto unwrap_failed;
+-	rcv_buf->len = offset + opaque_len;
+ 
+-	maj_stat = gss_unwrap(ctx->gc_gss_ctx, offset, rcv_buf);
++	maj_stat = gss_unwrap(ctx->gc_gss_ctx, offset,
++			      offset + opaque_len, rcv_buf);
+ 	if (maj_stat == GSS_S_CONTEXT_EXPIRED)
+ 		clear_bit(RPCAUTH_CRED_UPTODATE, &cred->cr_flags);
+ 	if (maj_stat != GSS_S_COMPLETE)
+@@ -2057,10 +2056,9 @@ gss_unwrap_resp_priv(struct rpc_task *task, struct rpc_cred *cred,
+ 	 */
+ 	xdr_init_decode(xdr, rcv_buf, p, rqstp);
+ 
+-	auth->au_rslack = auth->au_verfsize + 2 +
+-			  XDR_QUADLEN(savedlen - rcv_buf->len);
+-	auth->au_ralign = auth->au_verfsize + 2 +
+-			  XDR_QUADLEN(savedlen - rcv_buf->len);
++	auth->au_rslack = auth->au_verfsize + 2 + ctx->gc_gss_ctx->slack;
++	auth->au_ralign = auth->au_verfsize + 2 + ctx->gc_gss_ctx->align;
++
+ 	return 0;
+ unwrap_failed:
+ 	trace_rpcgss_unwrap_failed(task);
+diff --git a/net/sunrpc/auth_gss/gss_krb5_crypto.c b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+index 6f2d30d7b766..e7180da1fc6a 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_crypto.c
++++ b/net/sunrpc/auth_gss/gss_krb5_crypto.c
+@@ -851,8 +851,8 @@ out_err:
+ }
+ 
+ u32
+-gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
+-		     u32 *headskip, u32 *tailskip)
++gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, u32 len,
++		     struct xdr_buf *buf, u32 *headskip, u32 *tailskip)
+ {
+ 	struct xdr_buf subbuf;
+ 	u32 ret = 0;
+@@ -881,7 +881,7 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
+ 
+ 	/* create a segment skipping the header and leaving out the checksum */
+ 	xdr_buf_subsegment(buf, &subbuf, offset + GSS_KRB5_TOK_HDR_LEN,
+-				    (buf->len - offset - GSS_KRB5_TOK_HDR_LEN -
++				    (len - offset - GSS_KRB5_TOK_HDR_LEN -
+ 				     kctx->gk5e->cksumlength));
+ 
+ 	nblocks = (subbuf.len + blocksize - 1) / blocksize;
+@@ -926,7 +926,7 @@ gss_krb5_aes_decrypt(struct krb5_ctx *kctx, u32 offset, struct xdr_buf *buf,
+ 		goto out_err;
+ 
+ 	/* Get the packet's hmac value */
+-	ret = read_bytes_from_xdr_buf(buf, buf->len - kctx->gk5e->cksumlength,
++	ret = read_bytes_from_xdr_buf(buf, len - kctx->gk5e->cksumlength,
+ 				      pkt_hmac, kctx->gk5e->cksumlength);
+ 	if (ret)
+ 		goto out_err;
+diff --git a/net/sunrpc/auth_gss/gss_krb5_wrap.c b/net/sunrpc/auth_gss/gss_krb5_wrap.c
+index 6c1920eed771..cf0fd170ac18 100644
+--- a/net/sunrpc/auth_gss/gss_krb5_wrap.c
++++ b/net/sunrpc/auth_gss/gss_krb5_wrap.c
+@@ -261,7 +261,9 @@ gss_wrap_kerberos_v1(struct krb5_ctx *kctx, int offset,
+ }
+ 
+ static u32
+-gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
++gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, int len,
++		       struct xdr_buf *buf, unsigned int *slack,
++		       unsigned int *align)
+ {
+ 	int			signalg;
+ 	int			sealalg;
+@@ -279,12 +281,13 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
+ 	u32			conflen = kctx->gk5e->conflen;
+ 	int			crypt_offset;
+ 	u8			*cksumkey;
++	unsigned int		saved_len = buf->len;
+ 
+ 	dprintk("RPC:       gss_unwrap_kerberos\n");
+ 
+ 	ptr = (u8 *)buf->head[0].iov_base + offset;
+ 	if (g_verify_token_header(&kctx->mech_used, &bodysize, &ptr,
+-					buf->len - offset))
++					len - offset))
+ 		return GSS_S_DEFECTIVE_TOKEN;
+ 
+ 	if ((ptr[0] != ((KG_TOK_WRAP_MSG >> 8) & 0xff)) ||
+@@ -324,6 +327,7 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
+ 	    (!kctx->initiate && direction != 0))
+ 		return GSS_S_BAD_SIG;
+ 
++	buf->len = len;
+ 	if (kctx->enctype == ENCTYPE_ARCFOUR_HMAC) {
+ 		struct crypto_sync_skcipher *cipher;
+ 		int err;
+@@ -376,11 +380,15 @@ gss_unwrap_kerberos_v1(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
+ 	data_len = (buf->head[0].iov_base + buf->head[0].iov_len) - data_start;
+ 	memmove(orig_start, data_start, data_len);
+ 	buf->head[0].iov_len -= (data_start - orig_start);
+-	buf->len -= (data_start - orig_start);
++	buf->len = len - (data_start - orig_start);
+ 
+ 	if (gss_krb5_remove_padding(buf, blocksize))
+ 		return GSS_S_DEFECTIVE_TOKEN;
+ 
++	/* slack must include room for krb5 padding */
++	*slack = XDR_QUADLEN(saved_len - buf->len);
++	/* The GSS blob always precedes the RPC message payload */
++	*align = *slack;
+ 	return GSS_S_COMPLETE;
+ }
+ 
+@@ -486,7 +494,9 @@ gss_wrap_kerberos_v2(struct krb5_ctx *kctx, u32 offset,
+ }
+ 
+ static u32
+-gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
++gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, int len,
++		       struct xdr_buf *buf, unsigned int *slack,
++		       unsigned int *align)
+ {
+ 	time64_t	now;
+ 	u8		*ptr;
+@@ -532,7 +542,7 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
+ 	if (rrc != 0)
+ 		rotate_left(offset + 16, buf, rrc);
+ 
+-	err = (*kctx->gk5e->decrypt_v2)(kctx, offset, buf,
++	err = (*kctx->gk5e->decrypt_v2)(kctx, offset, len, buf,
+ 					&headskip, &tailskip);
+ 	if (err)
+ 		return GSS_S_FAILURE;
+@@ -542,7 +552,7 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
+ 	 * it against the original
+ 	 */
+ 	err = read_bytes_from_xdr_buf(buf,
+-				buf->len - GSS_KRB5_TOK_HDR_LEN - tailskip,
++				len - GSS_KRB5_TOK_HDR_LEN - tailskip,
+ 				decrypted_hdr, GSS_KRB5_TOK_HDR_LEN);
+ 	if (err) {
+ 		dprintk("%s: error %u getting decrypted_hdr\n", __func__, err);
+@@ -568,18 +578,19 @@ gss_unwrap_kerberos_v2(struct krb5_ctx *kctx, int offset, struct xdr_buf *buf)
+ 	 * Note that buf->head[0].iov_len may indicate the available
+ 	 * head buffer space rather than that actually occupied.
+ 	 */
+-	movelen = min_t(unsigned int, buf->head[0].iov_len, buf->len);
++	movelen = min_t(unsigned int, buf->head[0].iov_len, len);
+ 	movelen -= offset + GSS_KRB5_TOK_HDR_LEN + headskip;
+-	if (offset + GSS_KRB5_TOK_HDR_LEN + headskip + movelen >
+-	    buf->head[0].iov_len)
+-		return GSS_S_FAILURE;
++	BUG_ON(offset + GSS_KRB5_TOK_HDR_LEN + headskip + movelen >
++							buf->head[0].iov_len);
+ 	memmove(ptr, ptr + GSS_KRB5_TOK_HDR_LEN + headskip, movelen);
+ 	buf->head[0].iov_len -= GSS_KRB5_TOK_HDR_LEN + headskip;
+-	buf->len -= GSS_KRB5_TOK_HDR_LEN + headskip;
++	buf->len = len - GSS_KRB5_TOK_HDR_LEN + headskip;
+ 
+ 	/* Trim off the trailing "extra count" and checksum blob */
+-	buf->len -= ec + GSS_KRB5_TOK_HDR_LEN + tailskip;
++	xdr_buf_trim(buf, ec + GSS_KRB5_TOK_HDR_LEN + tailskip);
+ 
++	*align = XDR_QUADLEN(GSS_KRB5_TOK_HDR_LEN + headskip);
++	*slack = *align + XDR_QUADLEN(ec + GSS_KRB5_TOK_HDR_LEN + tailskip);
+ 	return GSS_S_COMPLETE;
+ }
+ 
+@@ -603,7 +614,8 @@ gss_wrap_kerberos(struct gss_ctx *gctx, int offset,
+ }
+ 
+ u32
+-gss_unwrap_kerberos(struct gss_ctx *gctx, int offset, struct xdr_buf *buf)
++gss_unwrap_kerberos(struct gss_ctx *gctx, int offset,
++		    int len, struct xdr_buf *buf)
+ {
+ 	struct krb5_ctx	*kctx = gctx->internal_ctx_id;
+ 
+@@ -613,9 +625,11 @@ gss_unwrap_kerberos(struct gss_ctx *gctx, int offset, struct xdr_buf *buf)
+ 	case ENCTYPE_DES_CBC_RAW:
+ 	case ENCTYPE_DES3_CBC_RAW:
+ 	case ENCTYPE_ARCFOUR_HMAC:
+-		return gss_unwrap_kerberos_v1(kctx, offset, buf);
++		return gss_unwrap_kerberos_v1(kctx, offset, len, buf,
++					      &gctx->slack, &gctx->align);
+ 	case ENCTYPE_AES128_CTS_HMAC_SHA1_96:
+ 	case ENCTYPE_AES256_CTS_HMAC_SHA1_96:
+-		return gss_unwrap_kerberos_v2(kctx, offset, buf);
++		return gss_unwrap_kerberos_v2(kctx, offset, len, buf,
++					      &gctx->slack, &gctx->align);
+ 	}
+ }
+diff --git a/net/sunrpc/auth_gss/gss_mech_switch.c b/net/sunrpc/auth_gss/gss_mech_switch.c
+index db550bfc2642..69316ab1b9fa 100644
+--- a/net/sunrpc/auth_gss/gss_mech_switch.c
++++ b/net/sunrpc/auth_gss/gss_mech_switch.c
+@@ -411,10 +411,11 @@ gss_wrap(struct gss_ctx	*ctx_id,
+ u32
+ gss_unwrap(struct gss_ctx	*ctx_id,
+ 	   int			offset,
++	   int			len,
+ 	   struct xdr_buf	*buf)
+ {
+ 	return ctx_id->mech_type->gm_ops
+-		->gss_unwrap(ctx_id, offset, buf);
++		->gss_unwrap(ctx_id, offset, len, buf);
+ }
+ 
+ 
+diff --git a/net/sunrpc/auth_gss/svcauth_gss.c b/net/sunrpc/auth_gss/svcauth_gss.c
+index 65b67b257302..322fd48887f9 100644
+--- a/net/sunrpc/auth_gss/svcauth_gss.c
++++ b/net/sunrpc/auth_gss/svcauth_gss.c
+@@ -900,7 +900,7 @@ unwrap_integ_data(struct svc_rqst *rqstp, struct xdr_buf *buf, u32 seq, struct g
+ 	if (svc_getnl(&buf->head[0]) != seq)
+ 		goto out;
+ 	/* trim off the mic and padding at the end before returning */
+-	buf->len -= 4 + round_up_to_quad(mic.len);
++	xdr_buf_trim(buf, round_up_to_quad(mic.len) + 4);
+ 	stat = 0;
+ out:
+ 	kfree(mic.data);
+@@ -928,7 +928,7 @@ static int
+ unwrap_priv_data(struct svc_rqst *rqstp, struct xdr_buf *buf, u32 seq, struct gss_ctx *ctx)
+ {
+ 	u32 priv_len, maj_stat;
+-	int pad, saved_len, remaining_len, offset;
++	int pad, remaining_len, offset;
+ 
+ 	clear_bit(RQ_SPLICE_OK, &rqstp->rq_flags);
+ 
+@@ -948,12 +948,8 @@ unwrap_priv_data(struct svc_rqst *rqstp, struct xdr_buf *buf, u32 seq, struct gs
+ 	buf->len -= pad;
+ 	fix_priv_head(buf, pad);
+ 
+-	/* Maybe it would be better to give gss_unwrap a length parameter: */
+-	saved_len = buf->len;
+-	buf->len = priv_len;
+-	maj_stat = gss_unwrap(ctx, 0, buf);
++	maj_stat = gss_unwrap(ctx, 0, priv_len, buf);
+ 	pad = priv_len - buf->len;
+-	buf->len = saved_len;
+ 	buf->len -= pad;
+ 	/* The upper layers assume the buffer is aligned on 4-byte boundaries.
+ 	 * In the krb5p case, at least, the data ends up offset, so we need to
+diff --git a/net/sunrpc/clnt.c b/net/sunrpc/clnt.c
+index 7324b21f923e..3ceaefb2f0bc 100644
+--- a/net/sunrpc/clnt.c
++++ b/net/sunrpc/clnt.c
+@@ -2416,6 +2416,11 @@ rpc_check_timeout(struct rpc_task *task)
+ {
+ 	struct rpc_clnt	*clnt = task->tk_client;
+ 
++	if (RPC_SIGNALLED(task)) {
++		rpc_call_rpcerror(task, -ERESTARTSYS);
++		return;
++	}
++
+ 	if (xprt_adjust_timeout(task->tk_rqstp) == 0)
+ 		return;
+ 
+diff --git a/net/sunrpc/xdr.c b/net/sunrpc/xdr.c
+index e5497dc2475b..f6da616267ce 100644
+--- a/net/sunrpc/xdr.c
++++ b/net/sunrpc/xdr.c
+@@ -1150,6 +1150,47 @@ xdr_buf_subsegment(struct xdr_buf *buf, struct xdr_buf *subbuf,
+ }
+ EXPORT_SYMBOL_GPL(xdr_buf_subsegment);
+ 
++/**
++ * xdr_buf_trim - lop at most "len" bytes off the end of "buf"
++ * @buf: buf to be trimmed
++ * @len: number of bytes to reduce "buf" by
++ *
++ * Trim an xdr_buf by the given number of bytes by fixing up the lengths. Note
++ * that it's possible that we'll trim less than that amount if the xdr_buf is
++ * too small, or if (for instance) it's all in the head and the parser has
++ * already read too far into it.
++ */
++void xdr_buf_trim(struct xdr_buf *buf, unsigned int len)
++{
++	size_t cur;
++	unsigned int trim = len;
++
++	if (buf->tail[0].iov_len) {
++		cur = min_t(size_t, buf->tail[0].iov_len, trim);
++		buf->tail[0].iov_len -= cur;
++		trim -= cur;
++		if (!trim)
++			goto fix_len;
++	}
++
++	if (buf->page_len) {
++		cur = min_t(unsigned int, buf->page_len, trim);
++		buf->page_len -= cur;
++		trim -= cur;
++		if (!trim)
++			goto fix_len;
++	}
++
++	if (buf->head[0].iov_len) {
++		cur = min_t(size_t, buf->head[0].iov_len, trim);
++		buf->head[0].iov_len -= cur;
++		trim -= cur;
++	}
++fix_len:
++	buf->len -= (len - trim);
++}
++EXPORT_SYMBOL_GPL(xdr_buf_trim);
++
+ static void __read_bytes_from_xdr_buf(struct xdr_buf *subbuf, void *obj, unsigned int len)
+ {
+ 	unsigned int this_len;
+diff --git a/net/sunrpc/xprtrdma/backchannel.c b/net/sunrpc/xprtrdma/backchannel.c
+index 1a0ae0c61353..4b43910a6ed2 100644
+--- a/net/sunrpc/xprtrdma/backchannel.c
++++ b/net/sunrpc/xprtrdma/backchannel.c
+@@ -115,7 +115,7 @@ int xprt_rdma_bc_send_reply(struct rpc_rqst *rqst)
+ 	if (rc < 0)
+ 		goto failed_marshal;
+ 
+-	if (rpcrdma_ep_post(&r_xprt->rx_ia, &r_xprt->rx_ep, req))
++	if (rpcrdma_post_sends(r_xprt, req))
+ 		goto drop_connection;
+ 	return 0;
+ 
+diff --git a/net/sunrpc/xprtrdma/frwr_ops.c b/net/sunrpc/xprtrdma/frwr_ops.c
+index 125297c9aa3e..79059d48f52b 100644
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -372,18 +372,22 @@ static void frwr_wc_fastreg(struct ib_cq *cq, struct ib_wc *wc)
+ }
+ 
+ /**
+- * frwr_send - post Send WR containing the RPC Call message
+- * @ia: interface adapter
+- * @req: Prepared RPC Call
++ * frwr_send - post Send WRs containing the RPC Call message
++ * @r_xprt: controlling transport instance
++ * @req: prepared RPC Call
+  *
+  * For FRWR, chain any FastReg WRs to the Send WR. Only a
+  * single ib_post_send call is needed to register memory
+  * and then post the Send WR.
+  *
+- * Returns the result of ib_post_send.
++ * Returns the return code from ib_post_send.
++ *
++ * Caller must hold the transport send lock to ensure that the
++ * pointers to the transport's rdma_cm_id and QP are stable.
+  */
+-int frwr_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req)
++int frwr_send(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+ {
++	struct rpcrdma_ia *ia = &r_xprt->rx_ia;
+ 	struct ib_send_wr *post_wr;
+ 	struct rpcrdma_mr *mr;
+ 
+diff --git a/net/sunrpc/xprtrdma/transport.c b/net/sunrpc/xprtrdma/transport.c
+index 3cfeba68ee9a..46e7949788e1 100644
+--- a/net/sunrpc/xprtrdma/transport.c
++++ b/net/sunrpc/xprtrdma/transport.c
+@@ -694,7 +694,7 @@ xprt_rdma_send_request(struct rpc_rqst *rqst)
+ 		goto drop_connection;
+ 	rqst->rq_xtime = ktime_get();
+ 
+-	if (rpcrdma_ep_post(&r_xprt->rx_ia, &r_xprt->rx_ep, req))
++	if (rpcrdma_post_sends(r_xprt, req))
+ 		goto drop_connection;
+ 
+ 	rqst->rq_xmit_bytes_sent += rqst->rq_snd_buf.len;
+diff --git a/net/sunrpc/xprtrdma/verbs.c b/net/sunrpc/xprtrdma/verbs.c
+index 353f61ac8d51..a48b99f3682c 100644
+--- a/net/sunrpc/xprtrdma/verbs.c
++++ b/net/sunrpc/xprtrdma/verbs.c
+@@ -1502,20 +1502,17 @@ static void rpcrdma_regbuf_free(struct rpcrdma_regbuf *rb)
+ }
+ 
+ /**
+- * rpcrdma_ep_post - Post WRs to a transport's Send Queue
+- * @ia: transport's device information
+- * @ep: transport's RDMA endpoint information
++ * rpcrdma_post_sends - Post WRs to a transport's Send Queue
++ * @r_xprt: controlling transport instance
+  * @req: rpcrdma_req containing the Send WR to post
+  *
+  * Returns 0 if the post was successful, otherwise -ENOTCONN
+  * is returned.
+  */
+-int
+-rpcrdma_ep_post(struct rpcrdma_ia *ia,
+-		struct rpcrdma_ep *ep,
+-		struct rpcrdma_req *req)
++int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req)
+ {
+ 	struct ib_send_wr *send_wr = &req->rl_wr;
++	struct rpcrdma_ep *ep = &r_xprt->rx_ep;
+ 	int rc;
+ 
+ 	if (!ep->rep_send_count || kref_read(&req->rl_kref) > 1) {
+@@ -1526,8 +1523,8 @@ rpcrdma_ep_post(struct rpcrdma_ia *ia,
+ 		--ep->rep_send_count;
+ 	}
+ 
+-	rc = frwr_send(ia, req);
+-	trace_xprtrdma_post_send(req, rc);
++	trace_xprtrdma_post_send(req);
++	rc = frwr_send(r_xprt, req);
+ 	if (rc)
+ 		return -ENOTCONN;
+ 	return 0;
+diff --git a/net/sunrpc/xprtrdma/xprt_rdma.h b/net/sunrpc/xprtrdma/xprt_rdma.h
+index 37d5080c250b..600574a0d838 100644
+--- a/net/sunrpc/xprtrdma/xprt_rdma.h
++++ b/net/sunrpc/xprtrdma/xprt_rdma.h
+@@ -469,8 +469,7 @@ void rpcrdma_ep_destroy(struct rpcrdma_xprt *r_xprt);
+ int rpcrdma_ep_connect(struct rpcrdma_ep *, struct rpcrdma_ia *);
+ void rpcrdma_ep_disconnect(struct rpcrdma_ep *, struct rpcrdma_ia *);
+ 
+-int rpcrdma_ep_post(struct rpcrdma_ia *, struct rpcrdma_ep *,
+-				struct rpcrdma_req *);
++int rpcrdma_post_sends(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+ void rpcrdma_post_recvs(struct rpcrdma_xprt *r_xprt, bool temp);
+ 
+ /*
+@@ -544,7 +543,7 @@ struct rpcrdma_mr_seg *frwr_map(struct rpcrdma_xprt *r_xprt,
+ 				struct rpcrdma_mr_seg *seg,
+ 				int nsegs, bool writing, __be32 xid,
+ 				struct rpcrdma_mr *mr);
+-int frwr_send(struct rpcrdma_ia *ia, struct rpcrdma_req *req);
++int frwr_send(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+ void frwr_reminv(struct rpcrdma_rep *rep, struct list_head *mrs);
+ void frwr_unmap_sync(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+ void frwr_unmap_async(struct rpcrdma_xprt *r_xprt, struct rpcrdma_req *req);
+diff --git a/scripts/kallsyms.c b/scripts/kallsyms.c
+index 3e8dea6e0a95..6dc3078649fa 100644
+--- a/scripts/kallsyms.c
++++ b/scripts/kallsyms.c
+@@ -34,7 +34,7 @@ struct sym_entry {
+ 	unsigned int len;
+ 	unsigned int start_pos;
+ 	unsigned int percpu_absolute;
+-	unsigned char sym[0];
++	unsigned char sym[];
+ };
+ 
+ struct addr_range {
+diff --git a/sound/core/rawmidi.c b/sound/core/rawmidi.c
+index 20dd08e1f675..2a688b711a9a 100644
+--- a/sound/core/rawmidi.c
++++ b/sound/core/rawmidi.c
+@@ -120,6 +120,17 @@ static void snd_rawmidi_input_event_work(struct work_struct *work)
+ 		runtime->event(runtime->substream);
+ }
+ 
++/* buffer refcount management: call with runtime->lock held */
++static inline void snd_rawmidi_buffer_ref(struct snd_rawmidi_runtime *runtime)
++{
++	runtime->buffer_ref++;
++}
++
++static inline void snd_rawmidi_buffer_unref(struct snd_rawmidi_runtime *runtime)
++{
++	runtime->buffer_ref--;
++}
++
+ static int snd_rawmidi_runtime_create(struct snd_rawmidi_substream *substream)
+ {
+ 	struct snd_rawmidi_runtime *runtime;
+@@ -669,6 +680,11 @@ static int resize_runtime_buffer(struct snd_rawmidi_runtime *runtime,
+ 		if (!newbuf)
+ 			return -ENOMEM;
+ 		spin_lock_irq(&runtime->lock);
++		if (runtime->buffer_ref) {
++			spin_unlock_irq(&runtime->lock);
++			kvfree(newbuf);
++			return -EBUSY;
++		}
+ 		oldbuf = runtime->buffer;
+ 		runtime->buffer = newbuf;
+ 		runtime->buffer_size = params->buffer_size;
+@@ -1019,8 +1035,10 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
+ 	long result = 0, count1;
+ 	struct snd_rawmidi_runtime *runtime = substream->runtime;
+ 	unsigned long appl_ptr;
++	int err = 0;
+ 
+ 	spin_lock_irqsave(&runtime->lock, flags);
++	snd_rawmidi_buffer_ref(runtime);
+ 	while (count > 0 && runtime->avail) {
+ 		count1 = runtime->buffer_size - runtime->appl_ptr;
+ 		if (count1 > count)
+@@ -1039,16 +1057,19 @@ static long snd_rawmidi_kernel_read1(struct snd_rawmidi_substream *substream,
+ 		if (userbuf) {
+ 			spin_unlock_irqrestore(&runtime->lock, flags);
+ 			if (copy_to_user(userbuf + result,
+-					 runtime->buffer + appl_ptr, count1)) {
+-				return result > 0 ? result : -EFAULT;
+-			}
++					 runtime->buffer + appl_ptr, count1))
++				err = -EFAULT;
+ 			spin_lock_irqsave(&runtime->lock, flags);
++			if (err)
++				goto out;
+ 		}
+ 		result += count1;
+ 		count -= count1;
+ 	}
++ out:
++	snd_rawmidi_buffer_unref(runtime);
+ 	spin_unlock_irqrestore(&runtime->lock, flags);
+-	return result;
++	return result > 0 ? result : err;
+ }
+ 
+ long snd_rawmidi_kernel_read(struct snd_rawmidi_substream *substream,
+@@ -1342,6 +1363,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
+ 			return -EAGAIN;
+ 		}
+ 	}
++	snd_rawmidi_buffer_ref(runtime);
+ 	while (count > 0 && runtime->avail > 0) {
+ 		count1 = runtime->buffer_size - runtime->appl_ptr;
+ 		if (count1 > count)
+@@ -1373,6 +1395,7 @@ static long snd_rawmidi_kernel_write1(struct snd_rawmidi_substream *substream,
+ 	}
+       __end:
+ 	count1 = runtime->avail < runtime->buffer_size;
++	snd_rawmidi_buffer_unref(runtime);
+ 	spin_unlock_irqrestore(&runtime->lock, flags);
+ 	if (count1)
+ 		snd_rawmidi_output_trigger(substream, 1);
+diff --git a/sound/firewire/amdtp-stream-trace.h b/sound/firewire/amdtp-stream-trace.h
+index 16c7f6605511..26e7cb555d3c 100644
+--- a/sound/firewire/amdtp-stream-trace.h
++++ b/sound/firewire/amdtp-stream-trace.h
+@@ -66,8 +66,7 @@ TRACE_EVENT(amdtp_packet,
+ 		__entry->irq,
+ 		__entry->index,
+ 		__print_array(__get_dynamic_array(cip_header),
+-			      __get_dynamic_array_len(cip_header),
+-			      sizeof(u8)))
++			      __get_dynamic_array_len(cip_header), 1))
+ );
+ 
+ #endif
+diff --git a/sound/pci/hda/patch_hdmi.c b/sound/pci/hda/patch_hdmi.c
+index 0c1a59d5ad59..0f3250417b95 100644
+--- a/sound/pci/hda/patch_hdmi.c
++++ b/sound/pci/hda/patch_hdmi.c
+@@ -2320,7 +2320,9 @@ static int generic_hdmi_build_controls(struct hda_codec *codec)
+ 
+ 	for (pin_idx = 0; pin_idx < spec->num_pins; pin_idx++) {
+ 		struct hdmi_spec_per_pin *per_pin = get_pin(spec, pin_idx);
++		struct hdmi_eld *pin_eld = &per_pin->sink_eld;
+ 
++		pin_eld->eld_valid = false;
+ 		hdmi_present_sense(per_pin, 0);
+ 	}
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index da4863d7f7f2..d73c814358bf 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -5743,6 +5743,15 @@ static void alc233_alc662_fixup_lenovo_dual_codecs(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc225_fixup_s3_pop_noise(struct hda_codec *codec,
++				      const struct hda_fixup *fix, int action)
++{
++	if (action != HDA_FIXUP_ACT_PRE_PROBE)
++		return;
++
++	codec->power_save_node = 1;
++}
++
+ /* Forcibly assign NID 0x03 to HP/LO while NID 0x02 to SPK for EQ */
+ static void alc274_fixup_bind_dacs(struct hda_codec *codec,
+ 				    const struct hda_fixup *fix, int action)
+@@ -5847,6 +5856,7 @@ enum {
+ 	ALC269_FIXUP_HP_LINE1_MIC1_LED,
+ 	ALC269_FIXUP_INV_DMIC,
+ 	ALC269_FIXUP_LENOVO_DOCK,
++	ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST,
+ 	ALC269_FIXUP_NO_SHUTUP,
+ 	ALC286_FIXUP_SONY_MIC_NO_PRESENCE,
+ 	ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT,
+@@ -5932,6 +5942,7 @@ enum {
+ 	ALC233_FIXUP_ACER_HEADSET_MIC,
+ 	ALC294_FIXUP_LENOVO_MIC_LOCATION,
+ 	ALC225_FIXUP_DELL_WYSE_MIC_NO_PRESENCE,
++	ALC225_FIXUP_S3_POP_NOISE,
+ 	ALC700_FIXUP_INTEL_REFERENCE,
+ 	ALC274_FIXUP_DELL_BIND_DACS,
+ 	ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
+@@ -5967,6 +5978,7 @@ enum {
+ 	ALC294_FIXUP_ASUS_DUAL_SPK,
+ 	ALC285_FIXUP_THINKPAD_HEADSET_JACK,
+ 	ALC294_FIXUP_ASUS_HPE,
++	ALC294_FIXUP_ASUS_COEF_1B,
+ 	ALC285_FIXUP_HP_GPIO_LED,
+ };
+ 
+@@ -6165,6 +6177,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT
+ 	},
++	[ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc269_fixup_limit_int_mic_boost,
++		.chained = true,
++		.chain_id = ALC269_FIXUP_LENOVO_DOCK,
++	},
+ 	[ALC269_FIXUP_PINCFG_NO_HP_TO_LINEOUT] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc269_fixup_pincfg_no_hp_to_lineout,
+@@ -6817,6 +6835,12 @@ static const struct hda_fixup alc269_fixups[] = {
+ 			{ }
+ 		},
+ 		.chained = true,
++		.chain_id = ALC225_FIXUP_S3_POP_NOISE
++	},
++	[ALC225_FIXUP_S3_POP_NOISE] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc225_fixup_s3_pop_noise,
++		.chained = true,
+ 		.chain_id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC
+ 	},
+ 	[ALC700_FIXUP_INTEL_REFERENCE] = {
+@@ -7089,6 +7113,17 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC294_FIXUP_ASUS_HEADSET_MIC
+ 	},
++	[ALC294_FIXUP_ASUS_COEF_1B] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			/* Set bit 10 to correct noisy output after reboot from
++			 * Windows 10 (due to pop noise reduction?)
++			 */
++			{ 0x20, AC_VERB_SET_COEF_INDEX, 0x1b },
++			{ 0x20, AC_VERB_SET_PROC_COEF, 0x4e4b },
++			{ }
++		},
++	},
+ 	[ALC285_FIXUP_HP_GPIO_LED] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_gpio_led,
+@@ -7260,6 +7295,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ 	SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
++	SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B),
+ 	SND_PCI_QUIRK(0x1043, 0x1b13, "Asus U41SV", ALC269_FIXUP_INV_DMIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1bbd, "ASUS Z550MA", ALC255_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1c23, "Asus X55U", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -7301,7 +7337,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x17aa, 0x21b8, "Thinkpad Edge 14", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21ca, "Thinkpad L412", ALC269_FIXUP_SKU_IGNORE),
+ 	SND_PCI_QUIRK(0x17aa, 0x21e9, "Thinkpad Edge 15", ALC269_FIXUP_SKU_IGNORE),
+-	SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK),
++	SND_PCI_QUIRK(0x17aa, 0x21f6, "Thinkpad T530", ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST),
+ 	SND_PCI_QUIRK(0x17aa, 0x21fa, "Thinkpad X230", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x21f3, "Thinkpad T430", ALC269_FIXUP_LENOVO_DOCK),
+ 	SND_PCI_QUIRK(0x17aa, 0x21fb, "Thinkpad T430s", ALC269_FIXUP_LENOVO_DOCK),
+@@ -7440,6 +7476,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC269_FIXUP_HEADSET_MODE, .name = "headset-mode"},
+ 	{.id = ALC269_FIXUP_HEADSET_MODE_NO_HP_MIC, .name = "headset-mode-no-hp-mic"},
+ 	{.id = ALC269_FIXUP_LENOVO_DOCK, .name = "lenovo-dock"},
++	{.id = ALC269_FIXUP_LENOVO_DOCK_LIMIT_BOOST, .name = "lenovo-dock-limit-boost"},
+ 	{.id = ALC269_FIXUP_HP_GPIO_LED, .name = "hp-gpio-led"},
+ 	{.id = ALC269_FIXUP_HP_DOCK_GPIO_MIC1_LED, .name = "hp-dock-gpio-mic1-led"},
+ 	{.id = ALC269_FIXUP_DELL1_MIC_NO_PRESENCE, .name = "dell-headset-multi"},
+@@ -8084,8 +8121,6 @@ static int patch_alc269(struct hda_codec *codec)
+ 		spec->gen.mixer_nid = 0;
+ 		break;
+ 	case 0x10ec0225:
+-		codec->power_save_node = 1;
+-		/* fall through */
+ 	case 0x10ec0295:
+ 	case 0x10ec0299:
+ 		spec->codec_variant = ALC269_TYPE_ALC225;
+diff --git a/sound/usb/quirks.c b/sound/usb/quirks.c
+index 0686e056e39b..732580bdc6a4 100644
+--- a/sound/usb/quirks.c
++++ b/sound/usb/quirks.c
+@@ -1592,13 +1592,14 @@ void snd_usb_ctl_msg_quirk(struct usb_device *dev, unsigned int pipe,
+ 	    && (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+ 		msleep(20);
+ 
+-	/* Zoom R16/24, Logitech H650e, Jabra 550a needs a tiny delay here,
+-	 * otherwise requests like get/set frequency return as failed despite
+-	 * actually succeeding.
++	/* Zoom R16/24, Logitech H650e, Jabra 550a, Kingston HyperX needs a tiny
++	 * delay here, otherwise requests like get/set frequency return as
++	 * failed despite actually succeeding.
+ 	 */
+ 	if ((chip->usb_id == USB_ID(0x1686, 0x00dd) ||
+ 	     chip->usb_id == USB_ID(0x046d, 0x0a46) ||
+-	     chip->usb_id == USB_ID(0x0b0e, 0x0349)) &&
++	     chip->usb_id == USB_ID(0x0b0e, 0x0349) ||
++	     chip->usb_id == USB_ID(0x0951, 0x16ad)) &&
+ 	    (requesttype & USB_TYPE_MASK) == USB_TYPE_CLASS)
+ 		usleep_range(1000, 2000);
+ }
+diff --git a/tools/testing/selftests/bpf/prog_tests/mmap.c b/tools/testing/selftests/bpf/prog_tests/mmap.c
+index 16a814eb4d64..b0e789678aa4 100644
+--- a/tools/testing/selftests/bpf/prog_tests/mmap.c
++++ b/tools/testing/selftests/bpf/prog_tests/mmap.c
+@@ -197,6 +197,15 @@ void test_mmap(void)
+ 	CHECK_FAIL(map_data->val[far] != 3 * 321);
+ 
+ 	munmap(tmp2, 4 * page_size);
++
++	/* map all 4 pages, but with pg_off=1 page, should fail */
++	tmp1 = mmap(NULL, 4 * page_size, PROT_READ, MAP_SHARED | MAP_FIXED,
++		    data_map_fd, page_size /* initial page shift */);
++	if (CHECK(tmp1 != MAP_FAILED, "adv_mmap7", "unexpected success")) {
++		munmap(tmp1, 4 * page_size);
++		goto cleanup;
++	}
++
+ cleanup:
+ 	if (bss_mmaped)
+ 		CHECK_FAIL(munmap(bss_mmaped, bss_sz));
+diff --git a/tools/testing/selftests/bpf/progs/test_overhead.c b/tools/testing/selftests/bpf/progs/test_overhead.c
+index bfe9fbcb9684..e15c7589695e 100644
+--- a/tools/testing/selftests/bpf/progs/test_overhead.c
++++ b/tools/testing/selftests/bpf/progs/test_overhead.c
+@@ -33,13 +33,13 @@ int prog3(struct bpf_raw_tracepoint_args *ctx)
+ SEC("fentry/__set_task_comm")
+ int BPF_PROG(prog4, struct task_struct *tsk, const char *buf, bool exec)
+ {
+-	return !tsk;
++	return 0;
+ }
+ 
+ SEC("fexit/__set_task_comm")
+ int BPF_PROG(prog5, struct task_struct *tsk, const char *buf, bool exec)
+ {
+-	return !tsk;
++	return 0;
+ }
+ 
+ char _license[] SEC("license") = "GPL";
+diff --git a/tools/testing/selftests/ftrace/ftracetest b/tools/testing/selftests/ftrace/ftracetest
+index 063ecb290a5a..144308a757b7 100755
+--- a/tools/testing/selftests/ftrace/ftracetest
++++ b/tools/testing/selftests/ftrace/ftracetest
+@@ -29,8 +29,25 @@ err_ret=1
+ # kselftest skip code is 4
+ err_skip=4
+ 
++# cgroup RT scheduling prevents chrt commands from succeeding, which
++# induces failures in test wakeup tests.  Disable for the duration of
++# the tests.
++
++readonly sched_rt_runtime=/proc/sys/kernel/sched_rt_runtime_us
++
++sched_rt_runtime_orig=$(cat $sched_rt_runtime)
++
++setup() {
++  echo -1 > $sched_rt_runtime
++}
++
++cleanup() {
++  echo $sched_rt_runtime_orig > $sched_rt_runtime
++}
++
+ errexit() { # message
+   echo "Error: $1" 1>&2
++  cleanup
+   exit $err_ret
+ }
+ 
+@@ -39,6 +56,8 @@ if [ `id -u` -ne 0 ]; then
+   errexit "this must be run by root user"
+ fi
+ 
++setup
++
+ # Utilities
+ absdir() { # file_path
+   (cd `dirname $1`; pwd)
+@@ -235,6 +254,7 @@ TOTAL_RESULT=0
+ 
+ INSTANCE=
+ CASENO=0
++
+ testcase() { # testfile
+   CASENO=$((CASENO+1))
+   desc=`grep "^#[ \t]*description:" $1 | cut -f2 -d:`
+@@ -406,5 +426,7 @@ prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
+ prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
+ prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`
+ 
++cleanup
++
+ # if no error, return 0
+ exit $TOTAL_RESULT
+diff --git a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_type.tc b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_type.tc
+index 1bcb67dcae26..81490ecaaa92 100644
+--- a/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_type.tc
++++ b/tools/testing/selftests/ftrace/test.d/kprobe/kprobe_args_type.tc
+@@ -38,7 +38,7 @@ for width in 64 32 16 8; do
+   echo 0 > events/kprobes/testprobe/enable
+ 
+   : "Confirm the arguments is recorded in given types correctly"
+-  ARGS=`grep "testprobe" trace | sed -e 's/.* arg1=\(.*\) arg2=\(.*\) arg3=\(.*\) arg4=\(.*\)/\1 \2 \3 \4/'`
++  ARGS=`grep "testprobe" trace | head -n 1 | sed -e 's/.* arg1=\(.*\) arg2=\(.*\) arg3=\(.*\) arg4=\(.*\)/\1 \2 \3 \4/'`
+   check_types $ARGS $width
+ 
+   : "Clear event for next loop"
+diff --git a/virt/kvm/arm/vgic/vgic-mmio-v2.c b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+index 5945f062d749..7b288eb391b8 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio-v2.c
++++ b/virt/kvm/arm/vgic/vgic-mmio-v2.c
+@@ -415,18 +415,20 @@ static const struct vgic_register_region vgic_v2_dist_registers[] = {
+ 		vgic_mmio_read_enable, vgic_mmio_write_cenable, NULL, NULL, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_SET,
+-		vgic_mmio_read_pending, vgic_mmio_write_spending, NULL, NULL, 1,
++		vgic_mmio_read_pending, vgic_mmio_write_spending,
++		NULL, vgic_uaccess_write_spending, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PENDING_CLEAR,
+-		vgic_mmio_read_pending, vgic_mmio_write_cpending, NULL, NULL, 1,
++		vgic_mmio_read_pending, vgic_mmio_write_cpending,
++		NULL, vgic_uaccess_write_cpending, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_SET,
+ 		vgic_mmio_read_active, vgic_mmio_write_sactive,
+-		NULL, vgic_mmio_uaccess_write_sactive, 1,
++		vgic_uaccess_read_active, vgic_mmio_uaccess_write_sactive, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_ACTIVE_CLEAR,
+ 		vgic_mmio_read_active, vgic_mmio_write_cactive,
+-		NULL, vgic_mmio_uaccess_write_cactive, 1,
++		vgic_uaccess_read_active, vgic_mmio_uaccess_write_cactive, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ(GIC_DIST_PRI,
+ 		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
+diff --git a/virt/kvm/arm/vgic/vgic-mmio-v3.c b/virt/kvm/arm/vgic/vgic-mmio-v3.c
+index ebc218840fc2..b1b066c148ce 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio-v3.c
++++ b/virt/kvm/arm/vgic/vgic-mmio-v3.c
+@@ -494,11 +494,11 @@ static const struct vgic_register_region vgic_v3_dist_registers[] = {
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ISACTIVER,
+ 		vgic_mmio_read_active, vgic_mmio_write_sactive,
+-		NULL, vgic_mmio_uaccess_write_sactive, 1,
++		vgic_uaccess_read_active, vgic_mmio_uaccess_write_sactive, 1,
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_ICACTIVER,
+ 		vgic_mmio_read_active, vgic_mmio_write_cactive,
+-		NULL, vgic_mmio_uaccess_write_cactive,
++		vgic_uaccess_read_active, vgic_mmio_uaccess_write_cactive,
+ 		1, VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_BITS_PER_IRQ_SHARED(GICD_IPRIORITYR,
+ 		vgic_mmio_read_priority, vgic_mmio_write_priority, NULL, NULL,
+@@ -566,12 +566,12 @@ static const struct vgic_register_region vgic_v3_rd_registers[] = {
+ 		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ISACTIVER0,
+ 		vgic_mmio_read_active, vgic_mmio_write_sactive,
+-		NULL, vgic_mmio_uaccess_write_sactive,
+-		4, VGIC_ACCESS_32bit),
++		vgic_uaccess_read_active, vgic_mmio_uaccess_write_sactive, 4,
++		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_LENGTH_UACCESS(SZ_64K + GICR_ICACTIVER0,
+ 		vgic_mmio_read_active, vgic_mmio_write_cactive,
+-		NULL, vgic_mmio_uaccess_write_cactive,
+-		4, VGIC_ACCESS_32bit),
++		vgic_uaccess_read_active, vgic_mmio_uaccess_write_cactive, 4,
++		VGIC_ACCESS_32bit),
+ 	REGISTER_DESC_WITH_LENGTH(SZ_64K + GICR_IPRIORITYR0,
+ 		vgic_mmio_read_priority, vgic_mmio_write_priority, 32,
+ 		VGIC_ACCESS_32bit | VGIC_ACCESS_8bit),
+diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
+index e7abd05ea896..b6824bba8248 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio.c
++++ b/virt/kvm/arm/vgic/vgic-mmio.c
+@@ -179,17 +179,6 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
+ 	return value;
+ }
+ 
+-/* Must be called with irq->irq_lock held */
+-static void vgic_hw_irq_spending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+-				 bool is_uaccess)
+-{
+-	if (is_uaccess)
+-		return;
+-
+-	irq->pending_latch = true;
+-	vgic_irq_set_phys_active(irq, true);
+-}
+-
+ static bool is_vgic_v2_sgi(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
+ {
+ 	return (vgic_irq_is_sgi(irq->intid) &&
+@@ -200,7 +189,6 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
+ 			      gpa_t addr, unsigned int len,
+ 			      unsigned long val)
+ {
+-	bool is_uaccess = !kvm_get_running_vcpu();
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 	int i;
+ 	unsigned long flags;
+@@ -215,22 +203,49 @@ void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
+ 		}
+ 
+ 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
++
++		irq->pending_latch = true;
+ 		if (irq->hw)
+-			vgic_hw_irq_spending(vcpu, irq, is_uaccess);
+-		else
+-			irq->pending_latch = true;
++			vgic_irq_set_phys_active(irq, true);
++
+ 		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
+ 		vgic_put_irq(vcpu->kvm, irq);
+ 	}
+ }
+ 
+-/* Must be called with irq->irq_lock held */
+-static void vgic_hw_irq_cpending(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+-				 bool is_uaccess)
++int vgic_uaccess_write_spending(struct kvm_vcpu *vcpu,
++				gpa_t addr, unsigned int len,
++				unsigned long val)
+ {
+-	if (is_uaccess)
+-		return;
++	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
++	int i;
++	unsigned long flags;
++
++	for_each_set_bit(i, &val, len * 8) {
++		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
++
++		raw_spin_lock_irqsave(&irq->irq_lock, flags);
++		irq->pending_latch = true;
++
++		/*
++		 * GICv2 SGIs are terribly broken. We can't restore
++		 * the source of the interrupt, so just pick the vcpu
++		 * itself as the source...
++		 */
++		if (is_vgic_v2_sgi(vcpu, irq))
++			irq->source |= BIT(vcpu->vcpu_id);
++
++		vgic_queue_irq_unlock(vcpu->kvm, irq, flags);
++
++		vgic_put_irq(vcpu->kvm, irq);
++	}
+ 
++	return 0;
++}
++
++/* Must be called with irq->irq_lock held */
++static void vgic_hw_irq_cpending(struct kvm_vcpu *vcpu, struct vgic_irq *irq)
++{
+ 	irq->pending_latch = false;
+ 
+ 	/*
+@@ -253,7 +268,6 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
+ 			      gpa_t addr, unsigned int len,
+ 			      unsigned long val)
+ {
+-	bool is_uaccess = !kvm_get_running_vcpu();
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 	int i;
+ 	unsigned long flags;
+@@ -270,7 +284,7 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
+ 		raw_spin_lock_irqsave(&irq->irq_lock, flags);
+ 
+ 		if (irq->hw)
+-			vgic_hw_irq_cpending(vcpu, irq, is_uaccess);
++			vgic_hw_irq_cpending(vcpu, irq);
+ 		else
+ 			irq->pending_latch = false;
+ 
+@@ -279,8 +293,68 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
+ 	}
+ }
+ 
+-unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+-				    gpa_t addr, unsigned int len)
++int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
++				gpa_t addr, unsigned int len,
++				unsigned long val)
++{
++	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
++	int i;
++	unsigned long flags;
++
++	for_each_set_bit(i, &val, len * 8) {
++		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
++
++		raw_spin_lock_irqsave(&irq->irq_lock, flags);
++		/*
++		 * More fun with GICv2 SGIs! If we're clearing one of them
++		 * from userspace, which source vcpu to clear? Let's not
++		 * even think of it, and blow the whole set.
++		 */
++		if (is_vgic_v2_sgi(vcpu, irq))
++			irq->source = 0;
++
++		irq->pending_latch = false;
++
++		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
++
++		vgic_put_irq(vcpu->kvm, irq);
++	}
++
++	return 0;
++}
++
++/*
++ * If we are fiddling with an IRQ's active state, we have to make sure the IRQ
++ * is not queued on some running VCPU's LRs, because then the change to the
++ * active state can be overwritten when the VCPU's state is synced coming back
++ * from the guest.
++ *
++ * For shared interrupts as well as GICv3 private interrupts, we have to
++ * stop all the VCPUs because interrupts can be migrated while we don't hold
++ * the IRQ locks and we don't want to be chasing moving targets.
++ *
++ * For GICv2 private interrupts we don't have to do anything because
++ * userspace accesses to the VGIC state already require all VCPUs to be
++ * stopped, and only the VCPU itself can modify its private interrupts
++ * active state, which guarantees that the VCPU is not running.
++ */
++static void vgic_access_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
++{
++	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
++	    intid >= VGIC_NR_PRIVATE_IRQS)
++		kvm_arm_halt_guest(vcpu->kvm);
++}
++
++/* See vgic_access_active_prepare */
++static void vgic_access_active_finish(struct kvm_vcpu *vcpu, u32 intid)
++{
++	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
++	    intid >= VGIC_NR_PRIVATE_IRQS)
++		kvm_arm_resume_guest(vcpu->kvm);
++}
++
++static unsigned long __vgic_mmio_read_active(struct kvm_vcpu *vcpu,
++					     gpa_t addr, unsigned int len)
+ {
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 	u32 value = 0;
+@@ -290,6 +364,10 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+ 	for (i = 0; i < len * 8; i++) {
+ 		struct vgic_irq *irq = vgic_get_irq(vcpu->kvm, vcpu, intid + i);
+ 
++		/*
++		 * Even for HW interrupts, don't evaluate the HW state as
++		 * all the guest is interested in is the virtual state.
++		 */
+ 		if (irq->active)
+ 			value |= (1U << i);
+ 
+@@ -299,6 +377,29 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+ 	return value;
+ }
+ 
++unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
++				    gpa_t addr, unsigned int len)
++{
++	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
++	u32 val;
++
++	mutex_lock(&vcpu->kvm->lock);
++	vgic_access_active_prepare(vcpu, intid);
++
++	val = __vgic_mmio_read_active(vcpu, addr, len);
++
++	vgic_access_active_finish(vcpu, intid);
++	mutex_unlock(&vcpu->kvm->lock);
++
++	return val;
++}
++
++unsigned long vgic_uaccess_read_active(struct kvm_vcpu *vcpu,
++				    gpa_t addr, unsigned int len)
++{
++	return __vgic_mmio_read_active(vcpu, addr, len);
++}
++
+ /* Must be called with irq->irq_lock held */
+ static void vgic_hw_irq_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+ 				      bool active, bool is_uaccess)
+@@ -350,36 +451,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
+ 		raw_spin_unlock_irqrestore(&irq->irq_lock, flags);
+ }
+ 
+-/*
+- * If we are fiddling with an IRQ's active state, we have to make sure the IRQ
+- * is not queued on some running VCPU's LRs, because then the change to the
+- * active state can be overwritten when the VCPU's state is synced coming back
+- * from the guest.
+- *
+- * For shared interrupts, we have to stop all the VCPUs because interrupts can
+- * be migrated while we don't hold the IRQ locks and we don't want to be
+- * chasing moving targets.
+- *
+- * For private interrupts we don't have to do anything because userspace
+- * accesses to the VGIC state already require all VCPUs to be stopped, and
+- * only the VCPU itself can modify its private interrupts active state, which
+- * guarantees that the VCPU is not running.
+- */
+-static void vgic_change_active_prepare(struct kvm_vcpu *vcpu, u32 intid)
+-{
+-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+-	    intid >= VGIC_NR_PRIVATE_IRQS)
+-		kvm_arm_halt_guest(vcpu->kvm);
+-}
+-
+-/* See vgic_change_active_prepare */
+-static void vgic_change_active_finish(struct kvm_vcpu *vcpu, u32 intid)
+-{
+-	if (vcpu->kvm->arch.vgic.vgic_model == KVM_DEV_TYPE_ARM_VGIC_V3 ||
+-	    intid >= VGIC_NR_PRIVATE_IRQS)
+-		kvm_arm_resume_guest(vcpu->kvm);
+-}
+-
+ static void __vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
+ 				      gpa_t addr, unsigned int len,
+ 				      unsigned long val)
+@@ -401,11 +472,11 @@ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 
+ 	mutex_lock(&vcpu->kvm->lock);
+-	vgic_change_active_prepare(vcpu, intid);
++	vgic_access_active_prepare(vcpu, intid);
+ 
+ 	__vgic_mmio_write_cactive(vcpu, addr, len, val);
+ 
+-	vgic_change_active_finish(vcpu, intid);
++	vgic_access_active_finish(vcpu, intid);
+ 	mutex_unlock(&vcpu->kvm->lock);
+ }
+ 
+@@ -438,11 +509,11 @@ void vgic_mmio_write_sactive(struct kvm_vcpu *vcpu,
+ 	u32 intid = VGIC_ADDR_TO_INTID(addr, 1);
+ 
+ 	mutex_lock(&vcpu->kvm->lock);
+-	vgic_change_active_prepare(vcpu, intid);
++	vgic_access_active_prepare(vcpu, intid);
+ 
+ 	__vgic_mmio_write_sactive(vcpu, addr, len, val);
+ 
+-	vgic_change_active_finish(vcpu, intid);
++	vgic_access_active_finish(vcpu, intid);
+ 	mutex_unlock(&vcpu->kvm->lock);
+ }
+ 
+diff --git a/virt/kvm/arm/vgic/vgic-mmio.h b/virt/kvm/arm/vgic/vgic-mmio.h
+index 5af2aefad435..b127f889113e 100644
+--- a/virt/kvm/arm/vgic/vgic-mmio.h
++++ b/virt/kvm/arm/vgic/vgic-mmio.h
+@@ -149,9 +149,20 @@ void vgic_mmio_write_cpending(struct kvm_vcpu *vcpu,
+ 			      gpa_t addr, unsigned int len,
+ 			      unsigned long val);
+ 
++int vgic_uaccess_write_spending(struct kvm_vcpu *vcpu,
++				gpa_t addr, unsigned int len,
++				unsigned long val);
++
++int vgic_uaccess_write_cpending(struct kvm_vcpu *vcpu,
++				gpa_t addr, unsigned int len,
++				unsigned long val);
++
+ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
+ 				    gpa_t addr, unsigned int len);
+ 
++unsigned long vgic_uaccess_read_active(struct kvm_vcpu *vcpu,
++				    gpa_t addr, unsigned int len);
++
+ void vgic_mmio_write_cactive(struct kvm_vcpu *vcpu,
+ 			     gpa_t addr, unsigned int len,
+ 			     unsigned long val);

diff --git a/1700_x86-gcc-10-early-boot-crash-fix.patch b/1700_x86-gcc-10-early-boot-crash-fix.patch
deleted file mode 100644
index 8cdf651..0000000
--- a/1700_x86-gcc-10-early-boot-crash-fix.patch
+++ /dev/null
@@ -1,131 +0,0 @@
-From f670269a42bfdd2c83a1118cc3d1b475547eac22 Mon Sep 17 00:00:00 2001
-From: Borislav Petkov <bp@suse.de>
-Date: Wed, 22 Apr 2020 18:11:30 +0200
-Subject: x86: Fix early boot crash on gcc-10, next try
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-... or the odyssey of trying to disable the stack protector for the
-function which generates the stack canary value.
-
-The whole story started with Sergei reporting a boot crash with a kernel
-built with gcc-10:
-
-  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
-  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
-  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
-  Call Trace:
-    dump_stack
-    panic
-    ? start_secondary
-    __stack_chk_fail
-    start_secondary
-    secondary_startup_64
-  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
-
-This happens because gcc-10 tail-call optimizes the last function call
-in start_secondary() - cpu_startup_entry() - and thus emits a stack
-canary check which fails because the canary value changes after the
-boot_init_stack_canary() call.
-
-To fix that, the initial attempt was to mark the one function which
-generates the stack canary with:
-
-  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)
-
-however, using the optimize attribute doesn't work cumulatively
-as the attribute does not add to but rather replaces previously
-supplied optimization options - roughly all -fxxx options.
-
-The key one among them being -fno-omit-frame-pointer, thus leading to a
-missing frame pointer - a frame pointer which the kernel needs.
-
-The next attempt to prevent compilers from tail-call optimizing
-the last function call cpu_startup_entry(), shy of carving out
-start_secondary() into a separate compilation unit and building it with
--fno-stack-protector, is this one.
-
-The current solution is short and sweet, and reportedly, is supported by
-both compilers so let's see how far we'll get this time.
-
-Reported-by: Sergei Trofimovich <slyfox@gentoo.org>
-Signed-off-by: Borislav Petkov <bp@suse.de>
-Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
-Reviewed-by: Kees Cook <keescook@chromium.org>
-Link: https://lkml.kernel.org/r/20200314164451.346497-1-slyfox@gentoo.org
----
- arch/x86/include/asm/stackprotector.h | 7 ++++++-
- arch/x86/kernel/smpboot.c             | 8 ++++++++
- arch/x86/xen/smp_pv.c                 | 1 +
- include/linux/compiler.h              | 6 ++++++
- 4 files changed, 21 insertions(+), 1 deletion(-)
-
-diff --git a/arch/x86/include/asm/stackprotector.h b/arch/x86/include/asm/stackprotector.h
-index 91e29b6a86a5..9804a7957f4e 100644
---- a/arch/x86/include/asm/stackprotector.h
-+++ b/arch/x86/include/asm/stackprotector.h
-@@ -55,8 +55,13 @@
- /*
-  * Initialize the stackprotector canary value.
-  *
-- * NOTE: this must only be called from functions that never return,
-+ * NOTE: this must only be called from functions that never return
-  * and it must always be inlined.
-+ *
-+ * In addition, it should be called from a compilation unit for which
-+ * stack protector is disabled. Alternatively, the caller should not end
-+ * with a function call which gets tail-call optimized as that would
-+ * lead to checking a modified canary value.
-  */
- static __always_inline void boot_init_stack_canary(void)
- {
-diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
-index fe3ab9632f3b..4f275ac7830b 100644
---- a/arch/x86/kernel/smpboot.c
-+++ b/arch/x86/kernel/smpboot.c
-@@ -266,6 +266,14 @@ static void notrace start_secondary(void *unused)
- 
- 	wmb();
- 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
-+
-+	/*
-+	 * Prevent tail call to cpu_startup_entry() because the stack protector
-+	 * guard has been changed a couple of function calls up, in
-+	 * boot_init_stack_canary() and must not be checked before tail calling
-+	 * another function.
-+	 */
-+	prevent_tail_call_optimization();
- }
- 
- /**
-diff --git a/arch/x86/xen/smp_pv.c b/arch/x86/xen/smp_pv.c
-index 8fb8a50a28b4..f2adb63b2d7c 100644
---- a/arch/x86/xen/smp_pv.c
-+++ b/arch/x86/xen/smp_pv.c
-@@ -93,6 +93,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
- 	cpu_bringup();
- 	boot_init_stack_canary();
- 	cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
-+	prevent_tail_call_optimization();
- }
- 
- void xen_smp_intr_free_pv(unsigned int cpu)
-diff --git a/include/linux/compiler.h b/include/linux/compiler.h
-index 034b0a644efc..732754d96039 100644
---- a/include/linux/compiler.h
-+++ b/include/linux/compiler.h
-@@ -356,4 +356,10 @@ static inline void *offset_to_ptr(const int *off)
- /* &a[0] degrades to a pointer: a different type from an array */
- #define __must_be_array(a)	BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
- 
-+/*
-+ * This is needed in functions which generate the stack canary, see
-+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
-+ */
-+#define prevent_tail_call_optimization()	asm("")
-+
- #endif /* __LINUX_COMPILER_H */
--- 
-cgit 1.2-0.3.lf.el7
-


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-20 23:13 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-20 23:13 UTC (permalink / raw
  To: gentoo-commits

commit:     dee616e55bf3f2ced4f2f4688df60626ed2f6a29
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 20 23:10:07 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 20 23:10:07 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=dee616e5

sign-file: full functionality with modern LibreSSL

Bug: https://bugs.gentoo.org/717166

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README                             |  4 ++++
 2920_sign-file-patch-for-libressl.patch | 16 ++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/0000_README b/0000_README
index 3a37e9d..50aaa31 100644
--- a/0000_README
+++ b/0000_README
@@ -127,6 +127,10 @@ Patch:  2910_TVP5150-Fix-build-issue-by-selecting-REGMAP-I2C.patch
 From:   https://bugs.gentoo.org/721096
 Desc:   VIDEO_TVP5150 requires REGMAP_I2C to build. Select it by default in Kconfig. See bug #721096. Thanks to Max Steel
 
+Patch:  2920_sign-file-patch-for-libressl.patch
+From:   https://bugs.gentoo.org/717166
+Desc:   sign-file: full functionality with modern LibreSSL
+
 Patch:  4567_distro-Gentoo-Kconfig.patch
 From:   Tom Wijsman <TomWij@gentoo.org>
 Desc:   Add Gentoo Linux support config settings and defaults.

diff --git a/2920_sign-file-patch-for-libressl.patch b/2920_sign-file-patch-for-libressl.patch
new file mode 100644
index 0000000..e6ec017
--- /dev/null
+++ b/2920_sign-file-patch-for-libressl.patch
@@ -0,0 +1,16 @@
+--- a/scripts/sign-file.c	2020-05-20 18:47:21.282820662 -0400
++++ b/scripts/sign-file.c	2020-05-20 18:48:37.991081899 -0400
+@@ -41,9 +41,10 @@
+  * signing with anything other than SHA1 - so we're stuck with that if such is
+  * the case.
+  */
+-#if defined(LIBRESSL_VERSION_NUMBER) || \
+-	OPENSSL_VERSION_NUMBER < 0x10000000L || \
+-	defined(OPENSSL_NO_CMS)
++#if defined(OPENSSL_NO_CMS) || \
++	( defined(LIBRESSL_VERSION_NUMBER) \
++	&& (LIBRESSL_VERSION_NUMBER < 0x3010000fL) ) || \
++	OPENSSL_VERSION_NUMBER < 0x10000000L
+ #define USE_PKCS7
+ #endif
+ #ifndef USE_PKCS7


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-05-27 16:32 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-05-27 16:32 UTC (permalink / raw
  To: gentoo-commits

commit:     4d328115ad37428d4e514eb35f786e11283b745d
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed May 27 16:32:16 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed May 27 16:32:16 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=4d328115

Linux patch 5.6.15

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +-
 1014_linux-5.6.15.patch | 4946 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 4948 insertions(+), 2 deletions(-)

diff --git a/0000_README b/0000_README
index 50aaa31..1c0ea04 100644
--- a/0000_README
+++ b/0000_README
@@ -99,9 +99,9 @@ Patch:  1013_linux-5.6.14.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.14
 
-Patch:  1013_linux-5.6.14.patch
+Patch:  1014_linux-5.6.15.patch
 From:   http://www.kernel.org
-Desc:   Linux 5.6.14
+Desc:   Linux 5.6.15
 
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644

diff --git a/1014_linux-5.6.15.patch b/1014_linux-5.6.15.patch
new file mode 100644
index 0000000..a62c0e3
--- /dev/null
+++ b/1014_linux-5.6.15.patch
@@ -0,0 +1,4946 @@
+diff --git a/Makefile b/Makefile
+index 713f93cceffe..3eca0c523098 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 14
++SUBLEVEL = 15
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+@@ -1248,11 +1248,15 @@ ifneq ($(dtstree),)
+ 	$(Q)$(MAKE) $(build)=$(dtstree) $(dtstree)/$@
+ 
+ PHONY += dtbs dtbs_install dtbs_check
+-dtbs dtbs_check: include/config/kernel.release scripts_dtc
++dtbs: include/config/kernel.release scripts_dtc
+ 	$(Q)$(MAKE) $(build)=$(dtstree)
+ 
++ifneq ($(filter dtbs_check, $(MAKECMDGOALS)),)
++dtbs: dt_binding_check
++endif
++
+ dtbs_check: export CHECK_DTBS=1
+-dtbs_check: dt_binding_check
++dtbs_check: dtbs
+ 
+ dtbs_install:
+ 	$(Q)$(MAKE) $(dtbinst)=$(dtstree)
+diff --git a/arch/arc/configs/hsdk_defconfig b/arch/arc/configs/hsdk_defconfig
+index 0974226fab55..aa000075a575 100644
+--- a/arch/arc/configs/hsdk_defconfig
++++ b/arch/arc/configs/hsdk_defconfig
+@@ -65,6 +65,7 @@ CONFIG_DRM_UDL=y
+ CONFIG_DRM_ETNAVIV=y
+ CONFIG_FB=y
+ CONFIG_FRAMEBUFFER_CONSOLE=y
++CONFIG_USB=y
+ CONFIG_USB_EHCI_HCD=y
+ CONFIG_USB_EHCI_HCD_PLATFORM=y
+ CONFIG_USB_OHCI_HCD=y
+diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
+index 97864aabc2a6..579f7eb6968a 100644
+--- a/arch/arm/Kconfig
++++ b/arch/arm/Kconfig
+@@ -13,6 +13,7 @@ config ARM
+ 	select ARCH_HAS_KEEPINITRD
+ 	select ARCH_HAS_KCOV
+ 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PTE_SPECIAL if ARM_LPAE
+ 	select ARCH_HAS_PHYS_TO_DMA
+ 	select ARCH_HAS_SETUP_DMA_OPS
+diff --git a/arch/arm/include/asm/futex.h b/arch/arm/include/asm/futex.h
+index 83c391b597d4..fdc4ae3e7378 100644
+--- a/arch/arm/include/asm/futex.h
++++ b/arch/arm/include/asm/futex.h
+@@ -164,8 +164,13 @@ arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
+ 	preempt_enable();
+ #endif
+ 
+-	if (!ret)
+-		*oval = oldval;
++	/*
++	 * Store unconditionally. If ret != 0 the extra store is the least
++	 * of the worries but GCC cannot figure out that __futex_atomic_op()
++	 * is either setting ret to -EFAULT or storing the old value in
++	 * oldval which results in a uninitialized warning at the call site.
++	 */
++	*oval = oldval;
+ 
+ 	return ret;
+ }
+diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
+index 0b30e884e088..84e1f0a43cdb 100644
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -21,6 +21,7 @@ config ARM64
+ 	select ARCH_HAS_KCOV
+ 	select ARCH_HAS_KEEPINITRD
+ 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PTE_DEVMAP
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_SETUP_DMA_OPS
+diff --git a/arch/arm64/kernel/ptrace.c b/arch/arm64/kernel/ptrace.c
+index cd6e5fa48b9c..c30f77bd875f 100644
+--- a/arch/arm64/kernel/ptrace.c
++++ b/arch/arm64/kernel/ptrace.c
+@@ -1829,10 +1829,11 @@ static void tracehook_report_syscall(struct pt_regs *regs,
+ 
+ int syscall_trace_enter(struct pt_regs *regs)
+ {
+-	if (test_thread_flag(TIF_SYSCALL_TRACE) ||
+-		test_thread_flag(TIF_SYSCALL_EMU)) {
++	unsigned long flags = READ_ONCE(current_thread_info()->flags);
++
++	if (flags & (_TIF_SYSCALL_EMU | _TIF_SYSCALL_TRACE)) {
+ 		tracehook_report_syscall(regs, PTRACE_SYSCALL_ENTER);
+-		if (!in_syscall(regs) || test_thread_flag(TIF_SYSCALL_EMU))
++		if (!in_syscall(regs) || (flags & _TIF_SYSCALL_EMU))
+ 			return -1;
+ 	}
+ 
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index 497b7d0b2d7e..b0fb42b0bf4b 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -129,7 +129,7 @@ config PPC
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_MEMBARRIER_CALLBACKS
+ 	select ARCH_HAS_SCALED_CPUTIME		if VIRT_CPU_ACCOUNTING_NATIVE && PPC_BOOK3S_64
+-	select ARCH_HAS_STRICT_KERNEL_RWX	if ((PPC_BOOK3S_64 || PPC32) && !HIBERNATION)
++	select ARCH_HAS_STRICT_KERNEL_RWX	if (PPC32 && !HIBERNATION)
+ 	select ARCH_HAS_TICK_BROADCAST		if GENERIC_CLOCKEVENTS_BROADCAST
+ 	select ARCH_HAS_UACCESS_FLUSHCACHE
+ 	select ARCH_HAS_UACCESS_MCSAFE		if PPC64
+diff --git a/arch/s390/include/asm/pci_io.h b/arch/s390/include/asm/pci_io.h
+index cd060b5dd8fd..e4dc64cc9c55 100644
+--- a/arch/s390/include/asm/pci_io.h
++++ b/arch/s390/include/asm/pci_io.h
+@@ -8,6 +8,10 @@
+ #include <linux/slab.h>
+ #include <asm/pci_insn.h>
+ 
++/* I/O size constraints */
++#define ZPCI_MAX_READ_SIZE	8
++#define ZPCI_MAX_WRITE_SIZE	128
++
+ /* I/O Map */
+ #define ZPCI_IOMAP_SHIFT		48
+ #define ZPCI_IOMAP_ADDR_BASE		0x8000000000000000UL
+@@ -140,7 +144,8 @@ static inline int zpci_memcpy_fromio(void *dst,
+ 
+ 	while (n > 0) {
+ 		size = zpci_get_max_write_size((u64 __force) src,
+-					       (u64) dst, n, 8);
++					       (u64) dst, n,
++					       ZPCI_MAX_READ_SIZE);
+ 		rc = zpci_read_single(dst, src, size);
+ 		if (rc)
+ 			break;
+@@ -161,7 +166,8 @@ static inline int zpci_memcpy_toio(volatile void __iomem *dst,
+ 
+ 	while (n > 0) {
+ 		size = zpci_get_max_write_size((u64 __force) dst,
+-					       (u64) src, n, 128);
++					       (u64) src, n,
++					       ZPCI_MAX_WRITE_SIZE);
+ 		if (size > 8) /* main path */
+ 			rc = zpci_write_block(dst, src, size);
+ 		else
+diff --git a/arch/s390/kernel/machine_kexec_file.c b/arch/s390/kernel/machine_kexec_file.c
+index 8415ae7d2a23..f9e4baa64b67 100644
+--- a/arch/s390/kernel/machine_kexec_file.c
++++ b/arch/s390/kernel/machine_kexec_file.c
+@@ -151,7 +151,7 @@ static int kexec_file_add_initrd(struct kimage *image,
+ 		buf.mem += crashk_res.start;
+ 	buf.memsz = buf.bufsz;
+ 
+-	data->parm->initrd_start = buf.mem;
++	data->parm->initrd_start = data->memsz;
+ 	data->parm->initrd_size = buf.memsz;
+ 	data->memsz += buf.memsz;
+ 
+diff --git a/arch/s390/kernel/machine_kexec_reloc.c b/arch/s390/kernel/machine_kexec_reloc.c
+index d5035de9020e..b7182cec48dc 100644
+--- a/arch/s390/kernel/machine_kexec_reloc.c
++++ b/arch/s390/kernel/machine_kexec_reloc.c
+@@ -28,6 +28,7 @@ int arch_kexec_do_relocs(int r_type, void *loc, unsigned long val,
+ 		break;
+ 	case R_390_64:		/* Direct 64 bit.  */
+ 	case R_390_GLOB_DAT:
++	case R_390_JMP_SLOT:
+ 		*(u64 *)loc = val;
+ 		break;
+ 	case R_390_PC16:	/* PC relative 16 bit.	*/
+diff --git a/arch/s390/pci/pci_mmio.c b/arch/s390/pci/pci_mmio.c
+index 7d42a8794f10..020a2c514d96 100644
+--- a/arch/s390/pci/pci_mmio.c
++++ b/arch/s390/pci/pci_mmio.c
+@@ -11,6 +11,113 @@
+ #include <linux/mm.h>
+ #include <linux/errno.h>
+ #include <linux/pci.h>
++#include <asm/pci_io.h>
++#include <asm/pci_debug.h>
++
++static inline void zpci_err_mmio(u8 cc, u8 status, u64 offset)
++{
++	struct {
++		u64 offset;
++		u8 cc;
++		u8 status;
++	} data = {offset, cc, status};
++
++	zpci_err_hex(&data, sizeof(data));
++}
++
++static inline int __pcistb_mio_inuser(
++		void __iomem *ioaddr, const void __user *src,
++		u64 len, u8 *status)
++{
++	int cc = -ENXIO;
++
++	asm volatile (
++		"       sacf 256\n"
++		"0:     .insn   rsy,0xeb00000000d4,%[len],%[ioaddr],%[src]\n"
++		"1:     ipm     %[cc]\n"
++		"       srl     %[cc],28\n"
++		"2:     sacf 768\n"
++		EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
++		: [cc] "+d" (cc), [len] "+d" (len)
++		: [ioaddr] "a" (ioaddr), [src] "Q" (*((u8 __force *)src))
++		: "cc", "memory");
++	*status = len >> 24 & 0xff;
++	return cc;
++}
++
++static inline int __pcistg_mio_inuser(
++		void __iomem *ioaddr, const void __user *src,
++		u64 ulen, u8 *status)
++{
++	register u64 addr asm("2") = (u64 __force) ioaddr;
++	register u64 len asm("3") = ulen;
++	int cc = -ENXIO;
++	u64 val = 0;
++	u64 cnt = ulen;
++	u8 tmp;
++
++	/*
++	 * copy 0 < @len <= 8 bytes from @src into the right most bytes of
++	 * a register, then store it to PCI at @ioaddr while in secondary
++	 * address space. pcistg then uses the user mappings.
++	 */
++	asm volatile (
++		"       sacf    256\n"
++		"0:     llgc    %[tmp],0(%[src])\n"
++		"       sllg    %[val],%[val],8\n"
++		"       aghi    %[src],1\n"
++		"       ogr     %[val],%[tmp]\n"
++		"       brctg   %[cnt],0b\n"
++		"1:     .insn   rre,0xb9d40000,%[val],%[ioaddr]\n"
++		"2:     ipm     %[cc]\n"
++		"       srl     %[cc],28\n"
++		"3:     sacf    768\n"
++		EX_TABLE(0b, 3b) EX_TABLE(1b, 3b) EX_TABLE(2b, 3b)
++		:
++		[src] "+a" (src), [cnt] "+d" (cnt),
++		[val] "+d" (val), [tmp] "=d" (tmp),
++		[len] "+d" (len), [cc] "+d" (cc),
++		[ioaddr] "+a" (addr)
++		:: "cc", "memory");
++	*status = len >> 24 & 0xff;
++
++	/* did we read everything from user memory? */
++	if (!cc && cnt != 0)
++		cc = -EFAULT;
++
++	return cc;
++}
++
++static inline int __memcpy_toio_inuser(void __iomem *dst,
++				   const void __user *src, size_t n)
++{
++	int size, rc = 0;
++	u8 status = 0;
++	mm_segment_t old_fs;
++
++	if (!src)
++		return -EINVAL;
++
++	old_fs = enable_sacf_uaccess();
++	while (n > 0) {
++		size = zpci_get_max_write_size((u64 __force) dst,
++					       (u64 __force) src, n,
++					       ZPCI_MAX_WRITE_SIZE);
++		if (size > 8) /* main path */
++			rc = __pcistb_mio_inuser(dst, src, size, &status);
++		else
++			rc = __pcistg_mio_inuser(dst, src, size, &status);
++		if (rc)
++			break;
++		src += size;
++		dst += size;
++		n -= size;
++	}
++	disable_sacf_uaccess(old_fs);
++	if (rc)
++		zpci_err_mmio(rc, status, (__force u64) dst);
++	return rc;
++}
+ 
+ static long get_pfn(unsigned long user_addr, unsigned long access,
+ 		    unsigned long *pfn)
+@@ -46,6 +153,20 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
+ 
+ 	if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length)
+ 		return -EINVAL;
++
++	/*
++	 * Only support read access to MIO capable devices on a MIO enabled
++	 * system. Otherwise we would have to check for every address if it is
++	 * a special ZPCI_ADDR and we would have to do a get_pfn() which we
++	 * don't need for MIO capable devices.
++	 */
++	if (static_branch_likely(&have_mio)) {
++		ret = __memcpy_toio_inuser((void  __iomem *) mmio_addr,
++					user_buffer,
++					length);
++		return ret;
++	}
++
+ 	if (length > 64) {
+ 		buf = kmalloc(length, GFP_KERNEL);
+ 		if (!buf)
+@@ -56,7 +177,8 @@ SYSCALL_DEFINE3(s390_pci_mmio_write, unsigned long, mmio_addr,
+ 	ret = get_pfn(mmio_addr, VM_WRITE, &pfn);
+ 	if (ret)
+ 		goto out;
+-	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) | (mmio_addr & ~PAGE_MASK));
++	io_addr = (void __iomem *)((pfn << PAGE_SHIFT) |
++			(mmio_addr & ~PAGE_MASK));
+ 
+ 	ret = -EFAULT;
+ 	if ((unsigned long) io_addr < ZPCI_IOMAP_ADDR_BASE)
+@@ -72,6 +194,78 @@ out:
+ 	return ret;
+ }
+ 
++static inline int __pcilg_mio_inuser(
++		void __user *dst, const void __iomem *ioaddr,
++		u64 ulen, u8 *status)
++{
++	register u64 addr asm("2") = (u64 __force) ioaddr;
++	register u64 len asm("3") = ulen;
++	u64 cnt = ulen;
++	int shift = ulen * 8;
++	int cc = -ENXIO;
++	u64 val, tmp;
++
++	/*
++	 * read 0 < @len <= 8 bytes from the PCI memory mapped at @ioaddr (in
++	 * user space) into a register using pcilg then store these bytes at
++	 * user address @dst
++	 */
++	asm volatile (
++		"       sacf    256\n"
++		"0:     .insn   rre,0xb9d60000,%[val],%[ioaddr]\n"
++		"1:     ipm     %[cc]\n"
++		"       srl     %[cc],28\n"
++		"       ltr     %[cc],%[cc]\n"
++		"       jne     4f\n"
++		"2:     ahi     %[shift],-8\n"
++		"       srlg    %[tmp],%[val],0(%[shift])\n"
++		"3:     stc     %[tmp],0(%[dst])\n"
++		"       aghi    %[dst],1\n"
++		"       brctg   %[cnt],2b\n"
++		"4:     sacf    768\n"
++		EX_TABLE(0b, 4b) EX_TABLE(1b, 4b) EX_TABLE(3b, 4b)
++		:
++		[cc] "+d" (cc), [val] "=d" (val), [len] "+d" (len),
++		[dst] "+a" (dst), [cnt] "+d" (cnt), [tmp] "=d" (tmp),
++		[shift] "+d" (shift)
++		:
++		[ioaddr] "a" (addr)
++		: "cc", "memory");
++
++	/* did we write everything to the user space buffer? */
++	if (!cc && cnt != 0)
++		cc = -EFAULT;
++
++	*status = len >> 24 & 0xff;
++	return cc;
++}
++
++static inline int __memcpy_fromio_inuser(void __user *dst,
++				     const void __iomem *src,
++				     unsigned long n)
++{
++	int size, rc = 0;
++	u8 status;
++	mm_segment_t old_fs;
++
++	old_fs = enable_sacf_uaccess();
++	while (n > 0) {
++		size = zpci_get_max_write_size((u64 __force) src,
++					       (u64 __force) dst, n,
++					       ZPCI_MAX_READ_SIZE);
++		rc = __pcilg_mio_inuser(dst, src, size, &status);
++		if (rc)
++			break;
++		src += size;
++		dst += size;
++		n -= size;
++	}
++	disable_sacf_uaccess(old_fs);
++	if (rc)
++		zpci_err_mmio(rc, status, (__force u64) dst);
++	return rc;
++}
++
+ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
+ 		void __user *, user_buffer, size_t, length)
+ {
+@@ -86,12 +280,27 @@ SYSCALL_DEFINE3(s390_pci_mmio_read, unsigned long, mmio_addr,
+ 
+ 	if (length <= 0 || PAGE_SIZE - (mmio_addr & ~PAGE_MASK) < length)
+ 		return -EINVAL;
++
++	/*
++	 * Only support write access to MIO capable devices on a MIO enabled
++	 * system. Otherwise we would have to check for every address if it is
++	 * a special ZPCI_ADDR and we would have to do a get_pfn() which we
++	 * don't need for MIO capable devices.
++	 */
++	if (static_branch_likely(&have_mio)) {
++		ret = __memcpy_fromio_inuser(
++				user_buffer, (const void __iomem *)mmio_addr,
++				length);
++		return ret;
++	}
++
+ 	if (length > 64) {
+ 		buf = kmalloc(length, GFP_KERNEL);
+ 		if (!buf)
+ 			return -ENOMEM;
+-	} else
++	} else {
+ 		buf = local_buf;
++	}
+ 
+ 	ret = get_pfn(mmio_addr, VM_READ, &pfn);
+ 	if (ret)
+diff --git a/arch/sh/include/uapi/asm/sockios.h b/arch/sh/include/uapi/asm/sockios.h
+index 3da561453260..ef01ced9e169 100644
+--- a/arch/sh/include/uapi/asm/sockios.h
++++ b/arch/sh/include/uapi/asm/sockios.h
+@@ -2,6 +2,8 @@
+ #ifndef __ASM_SH_SOCKIOS_H
+ #define __ASM_SH_SOCKIOS_H
+ 
++#include <linux/time_types.h>
++
+ /* Socket-level I/O control calls. */
+ #define FIOGETOWN	_IOR('f', 123, int)
+ #define FIOSETOWN 	_IOW('f', 124, int)
+diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
+index f56c3c9a9793..80061bc93bdc 100644
+--- a/arch/sparc/mm/srmmu.c
++++ b/arch/sparc/mm/srmmu.c
+@@ -331,9 +331,9 @@ static void __init srmmu_nocache_init(void)
+ 
+ 	while (vaddr < srmmu_nocache_end) {
+ 		pgd = pgd_offset_k(vaddr);
+-		p4d = p4d_offset(__nocache_fix(pgd), vaddr);
+-		pud = pud_offset(__nocache_fix(p4d), vaddr);
+-		pmd = pmd_offset(__nocache_fix(pgd), vaddr);
++		p4d = p4d_offset(pgd, vaddr);
++		pud = pud_offset(p4d, vaddr);
++		pmd = pmd_offset(__nocache_fix(pud), vaddr);
+ 		pte = pte_offset_kernel(__nocache_fix(pmd), vaddr);
+ 
+ 		pteval = ((paddr >> 4) | SRMMU_ET_PTE | SRMMU_PRIV);
+diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
+index beea77046f9b..0bc9a74468be 100644
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -70,6 +70,7 @@ config X86
+ 	select ARCH_HAS_KCOV			if X86_64
+ 	select ARCH_HAS_MEM_ENCRYPT
+ 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PMEM_API		if X86_64
+ 	select ARCH_HAS_PTE_DEVMAP		if X86_64
+ 	select ARCH_HAS_PTE_SPECIAL
+diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
+index 5f973fed3c9f..e289722b04f6 100644
+--- a/arch/x86/kernel/apic/apic.c
++++ b/arch/x86/kernel/apic/apic.c
+@@ -352,8 +352,6 @@ static void __setup_APIC_LVTT(unsigned int clocks, int oneshot, int irqen)
+ 		 * According to Intel, MFENCE can do the serialization here.
+ 		 */
+ 		asm volatile("mfence" : : : "memory");
+-
+-		printk_once(KERN_DEBUG "TSC deadline timer enabled\n");
+ 		return;
+ 	}
+ 
+@@ -552,7 +550,7 @@ static DEFINE_PER_CPU(struct clock_event_device, lapic_events);
+ #define DEADLINE_MODEL_MATCH_REV(model, rev)	\
+ 	{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)rev }
+ 
+-static u32 hsx_deadline_rev(void)
++static __init u32 hsx_deadline_rev(void)
+ {
+ 	switch (boot_cpu_data.x86_stepping) {
+ 	case 0x02: return 0x3a; /* EP */
+@@ -562,7 +560,7 @@ static u32 hsx_deadline_rev(void)
+ 	return ~0U;
+ }
+ 
+-static u32 bdx_deadline_rev(void)
++static __init u32 bdx_deadline_rev(void)
+ {
+ 	switch (boot_cpu_data.x86_stepping) {
+ 	case 0x02: return 0x00000011;
+@@ -574,7 +572,7 @@ static u32 bdx_deadline_rev(void)
+ 	return ~0U;
+ }
+ 
+-static u32 skx_deadline_rev(void)
++static __init u32 skx_deadline_rev(void)
+ {
+ 	switch (boot_cpu_data.x86_stepping) {
+ 	case 0x03: return 0x01000136;
+@@ -587,7 +585,7 @@ static u32 skx_deadline_rev(void)
+ 	return ~0U;
+ }
+ 
+-static const struct x86_cpu_id deadline_match[] = {
++static const struct x86_cpu_id deadline_match[] __initconst = {
+ 	DEADLINE_MODEL_MATCH_FUNC( INTEL_FAM6_HASWELL_X,	hsx_deadline_rev),
+ 	DEADLINE_MODEL_MATCH_REV ( INTEL_FAM6_BROADWELL_X,	0x0b000020),
+ 	DEADLINE_MODEL_MATCH_FUNC( INTEL_FAM6_BROADWELL_D,	bdx_deadline_rev),
+@@ -609,18 +607,19 @@ static const struct x86_cpu_id deadline_match[] = {
+ 	{},
+ };
+ 
+-static void apic_check_deadline_errata(void)
++static __init bool apic_validate_deadline_timer(void)
+ {
+ 	const struct x86_cpu_id *m;
+ 	u32 rev;
+ 
+-	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER) ||
+-	    boot_cpu_has(X86_FEATURE_HYPERVISOR))
+-		return;
++	if (!boot_cpu_has(X86_FEATURE_TSC_DEADLINE_TIMER))
++		return false;
++	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++		return true;
+ 
+ 	m = x86_match_cpu(deadline_match);
+ 	if (!m)
+-		return;
++		return true;
+ 
+ 	/*
+ 	 * Function pointers will have the MSB set due to address layout,
+@@ -632,11 +631,12 @@ static void apic_check_deadline_errata(void)
+ 		rev = (u32)m->driver_data;
+ 
+ 	if (boot_cpu_data.microcode >= rev)
+-		return;
++		return true;
+ 
+ 	setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
+ 	pr_err(FW_BUG "TSC_DEADLINE disabled due to Errata; "
+ 	       "please update microcode to version: 0x%x (or later)\n", rev);
++	return false;
+ }
+ 
+ /*
+@@ -2098,7 +2098,8 @@ void __init init_apic_mappings(void)
+ {
+ 	unsigned int new_apicid;
+ 
+-	apic_check_deadline_errata();
++	if (apic_validate_deadline_timer())
++		pr_debug("TSC deadline timer available\n");
+ 
+ 	if (x2apic_mode) {
+ 		boot_cpu_physical_apicid = read_apic_id();
+diff --git a/arch/x86/kernel/unwind_orc.c b/arch/x86/kernel/unwind_orc.c
+index 9414f02a55ea..1a90abeca5f3 100644
+--- a/arch/x86/kernel/unwind_orc.c
++++ b/arch/x86/kernel/unwind_orc.c
+@@ -314,12 +314,19 @@ EXPORT_SYMBOL_GPL(unwind_get_return_address);
+ 
+ unsigned long *unwind_get_return_address_ptr(struct unwind_state *state)
+ {
++	struct task_struct *task = state->task;
++
+ 	if (unwind_done(state))
+ 		return NULL;
+ 
+ 	if (state->regs)
+ 		return &state->regs->ip;
+ 
++	if (task != current && state->sp == task->thread.sp) {
++		struct inactive_task_frame *frame = (void *)task->thread.sp;
++		return &frame->ret_addr;
++	}
++
+ 	if (state->sp)
+ 		return (unsigned long *)state->sp - 1;
+ 
+diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
+index 03b3067811c9..2713ddb3348c 100644
+--- a/drivers/acpi/ec.c
++++ b/drivers/acpi/ec.c
+@@ -2064,9 +2064,13 @@ bool acpi_ec_dispatch_gpe(void)
+ 	 * to allow the caller to process events properly after that.
+ 	 */
+ 	ret = acpi_dispatch_gpe(NULL, first_ec->gpe);
+-	if (ret == ACPI_INTERRUPT_HANDLED)
++	if (ret == ACPI_INTERRUPT_HANDLED) {
+ 		pm_pr_dbg("EC GPE dispatched\n");
+ 
++		/* Flush the event and query workqueues. */
++		acpi_ec_flush_work();
++	}
++
+ 	return false;
+ }
+ #endif /* CONFIG_PM_SLEEP */
+diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
+index 3850704570c0..fd9d4e8318e9 100644
+--- a/drivers/acpi/sleep.c
++++ b/drivers/acpi/sleep.c
+@@ -980,13 +980,6 @@ static int acpi_s2idle_prepare_late(void)
+ 	return 0;
+ }
+ 
+-static void acpi_s2idle_sync(void)
+-{
+-	/* The EC driver uses special workqueues that need to be flushed. */
+-	acpi_ec_flush_work();
+-	acpi_os_wait_events_complete(); /* synchronize Notify handling */
+-}
+-
+ static bool acpi_s2idle_wake(void)
+ {
+ 	if (!acpi_sci_irq_valid())
+@@ -1018,7 +1011,7 @@ static bool acpi_s2idle_wake(void)
+ 			return true;
+ 
+ 		/*
+-		 * Cancel the wakeup and process all pending events in case
++		 * Cancel the SCI wakeup and process all pending events in case
+ 		 * there are any wakeup ones in there.
+ 		 *
+ 		 * Note that if any non-EC GPEs are active at this point, the
+@@ -1026,8 +1019,7 @@ static bool acpi_s2idle_wake(void)
+ 		 * should be missed by canceling the wakeup here.
+ 		 */
+ 		pm_system_cancel_wakeup();
+-
+-		acpi_s2idle_sync();
++		acpi_os_wait_events_complete();
+ 
+ 		/*
+ 		 * The SCI is in the "suspended" state now and it cannot produce
+@@ -1060,7 +1052,8 @@ static void acpi_s2idle_restore(void)
+ 	 * of GPEs.
+ 	 */
+ 	acpi_os_wait_events_complete(); /* synchronize GPE processing */
+-	acpi_s2idle_sync();
++	acpi_ec_flush_work(); /* flush the EC driver's workqueues */
++	acpi_os_wait_events_complete(); /* synchronize Notify handling */
+ 
+ 	s2idle_wakeup = false;
+ 
+diff --git a/drivers/base/component.c b/drivers/base/component.c
+index c7879f5ae2fb..53b19daca750 100644
+--- a/drivers/base/component.c
++++ b/drivers/base/component.c
+@@ -256,7 +256,8 @@ static int try_to_bring_up_master(struct master *master,
+ 	ret = master->ops->bind(master->dev);
+ 	if (ret < 0) {
+ 		devres_release_group(master->dev, NULL);
+-		dev_info(master->dev, "master bind failed: %d\n", ret);
++		if (ret != -EPROBE_DEFER)
++			dev_info(master->dev, "master bind failed: %d\n", ret);
+ 		return ret;
+ 	}
+ 
+@@ -610,8 +611,9 @@ static int component_bind(struct component *component, struct master *master,
+ 		devres_release_group(component->dev, NULL);
+ 		devres_release_group(master->dev, NULL);
+ 
+-		dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
+-			dev_name(component->dev), component->ops, ret);
++		if (ret != -EPROBE_DEFER)
++			dev_err(master->dev, "failed to bind %s (ops %ps): %d\n",
++				dev_name(component->dev), component->ops, ret);
+ 	}
+ 
+ 	return ret;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index d32a3aefff32..68277687c160 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -365,6 +365,7 @@ struct device_link *device_link_add(struct device *consumer,
+ 				link->flags |= DL_FLAG_STATELESS;
+ 				goto reorder;
+ 			} else {
++				link->flags |= DL_FLAG_STATELESS;
+ 				goto out;
+ 			}
+ 		}
+@@ -433,12 +434,16 @@ struct device_link *device_link_add(struct device *consumer,
+ 	    flags & DL_FLAG_PM_RUNTIME)
+ 		pm_runtime_resume(supplier);
+ 
++	list_add_tail_rcu(&link->s_node, &supplier->links.consumers);
++	list_add_tail_rcu(&link->c_node, &consumer->links.suppliers);
++
+ 	if (flags & DL_FLAG_SYNC_STATE_ONLY) {
+ 		dev_dbg(consumer,
+ 			"Linked as a sync state only consumer to %s\n",
+ 			dev_name(supplier));
+ 		goto out;
+ 	}
++
+ reorder:
+ 	/*
+ 	 * Move the consumer and all of the devices depending on it to the end
+@@ -449,12 +454,9 @@ reorder:
+ 	 */
+ 	device_reorder_to_tail(consumer, NULL);
+ 
+-	list_add_tail_rcu(&link->s_node, &supplier->links.consumers);
+-	list_add_tail_rcu(&link->c_node, &consumer->links.suppliers);
+-
+ 	dev_dbg(consumer, "Linked as a consumer to %s\n", dev_name(supplier));
+ 
+- out:
++out:
+ 	device_pm_unlock();
+ 	device_links_write_unlock();
+ 
+@@ -829,6 +831,13 @@ static void __device_links_supplier_defer_sync(struct device *sup)
+ 		list_add_tail(&sup->links.defer_sync, &deferred_sync);
+ }
+ 
++static void device_link_drop_managed(struct device_link *link)
++{
++	link->flags &= ~DL_FLAG_MANAGED;
++	WRITE_ONCE(link->status, DL_STATE_NONE);
++	kref_put(&link->kref, __device_link_del);
++}
++
+ /**
+  * device_links_driver_bound - Update device links after probing its driver.
+  * @dev: Device to update the links for.
+@@ -842,7 +851,7 @@ static void __device_links_supplier_defer_sync(struct device *sup)
+  */
+ void device_links_driver_bound(struct device *dev)
+ {
+-	struct device_link *link;
++	struct device_link *link, *ln;
+ 	LIST_HEAD(sync_list);
+ 
+ 	/*
+@@ -882,18 +891,35 @@ void device_links_driver_bound(struct device *dev)
+ 	else
+ 		__device_links_queue_sync_state(dev, &sync_list);
+ 
+-	list_for_each_entry(link, &dev->links.suppliers, c_node) {
++	list_for_each_entry_safe(link, ln, &dev->links.suppliers, c_node) {
++		struct device *supplier;
++
+ 		if (!(link->flags & DL_FLAG_MANAGED))
+ 			continue;
+ 
+-		WARN_ON(link->status != DL_STATE_CONSUMER_PROBE);
+-		WRITE_ONCE(link->status, DL_STATE_ACTIVE);
++		supplier = link->supplier;
++		if (link->flags & DL_FLAG_SYNC_STATE_ONLY) {
++			/*
++			 * When DL_FLAG_SYNC_STATE_ONLY is set, it means no
++			 * other DL_MANAGED_LINK_FLAGS have been set. So, it's
++			 * safe to drop the managed link completely.
++			 */
++			device_link_drop_managed(link);
++		} else {
++			WARN_ON(link->status != DL_STATE_CONSUMER_PROBE);
++			WRITE_ONCE(link->status, DL_STATE_ACTIVE);
++		}
+ 
++		/*
++		 * This needs to be done even for the deleted
++		 * DL_FLAG_SYNC_STATE_ONLY device link in case it was the last
++		 * device link that was preventing the supplier from getting a
++		 * sync_state() call.
++		 */
+ 		if (defer_sync_state_count)
+-			__device_links_supplier_defer_sync(link->supplier);
++			__device_links_supplier_defer_sync(supplier);
+ 		else
+-			__device_links_queue_sync_state(link->supplier,
+-							&sync_list);
++			__device_links_queue_sync_state(supplier, &sync_list);
+ 	}
+ 
+ 	dev->links.status = DL_DEV_DRIVER_BOUND;
+@@ -903,13 +929,6 @@ void device_links_driver_bound(struct device *dev)
+ 	device_links_flush_sync_list(&sync_list, dev);
+ }
+ 
+-static void device_link_drop_managed(struct device_link *link)
+-{
+-	link->flags &= ~DL_FLAG_MANAGED;
+-	WRITE_ONCE(link->status, DL_STATE_NONE);
+-	kref_put(&link->kref, __device_link_del);
+-}
+-
+ /**
+  * __device_links_no_driver - Update links of a device without a driver.
+  * @dev: Device without a drvier.
+diff --git a/drivers/base/platform.c b/drivers/base/platform.c
+index c81b68d5d66d..b5ce7b085795 100644
+--- a/drivers/base/platform.c
++++ b/drivers/base/platform.c
+@@ -361,8 +361,6 @@ struct platform_object {
+  */
+ static void setup_pdev_dma_masks(struct platform_device *pdev)
+ {
+-	pdev->dev.dma_parms = &pdev->dma_parms;
+-
+ 	if (!pdev->dev.coherent_dma_mask)
+ 		pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32);
+ 	if (!pdev->dev.dma_mask) {
+diff --git a/drivers/dax/kmem.c b/drivers/dax/kmem.c
+index 3d0a7e702c94..1e678bdf5aed 100644
+--- a/drivers/dax/kmem.c
++++ b/drivers/dax/kmem.c
+@@ -22,6 +22,7 @@ int dev_dax_kmem_probe(struct device *dev)
+ 	resource_size_t kmem_size;
+ 	resource_size_t kmem_end;
+ 	struct resource *new_res;
++	const char *new_res_name;
+ 	int numa_node;
+ 	int rc;
+ 
+@@ -48,11 +49,16 @@ int dev_dax_kmem_probe(struct device *dev)
+ 	kmem_size &= ~(memory_block_size_bytes() - 1);
+ 	kmem_end = kmem_start + kmem_size;
+ 
+-	/* Region is permanently reserved.  Hot-remove not yet implemented. */
+-	new_res = request_mem_region(kmem_start, kmem_size, dev_name(dev));
++	new_res_name = kstrdup(dev_name(dev), GFP_KERNEL);
++	if (!new_res_name)
++		return -ENOMEM;
++
++	/* Region is permanently reserved if hotremove fails. */
++	new_res = request_mem_region(kmem_start, kmem_size, new_res_name);
+ 	if (!new_res) {
+ 		dev_warn(dev, "could not reserve region [%pa-%pa]\n",
+ 			 &kmem_start, &kmem_end);
++		kfree(new_res_name);
+ 		return -EBUSY;
+ 	}
+ 
+@@ -63,12 +69,12 @@ int dev_dax_kmem_probe(struct device *dev)
+ 	 * unknown to us that will break add_memory() below.
+ 	 */
+ 	new_res->flags = IORESOURCE_SYSTEM_RAM;
+-	new_res->name = dev_name(dev);
+ 
+ 	rc = add_memory(numa_node, new_res->start, resource_size(new_res));
+ 	if (rc) {
+ 		release_resource(new_res);
+ 		kfree(new_res);
++		kfree(new_res_name);
+ 		return rc;
+ 	}
+ 	dev_dax->dax_kmem_res = new_res;
+@@ -83,6 +89,7 @@ static int dev_dax_kmem_remove(struct device *dev)
+ 	struct resource *res = dev_dax->dax_kmem_res;
+ 	resource_size_t kmem_start = res->start;
+ 	resource_size_t kmem_size = resource_size(res);
++	const char *res_name = res->name;
+ 	int rc;
+ 
+ 	/*
+@@ -102,6 +109,7 @@ static int dev_dax_kmem_remove(struct device *dev)
+ 	/* Release and free dax resources */
+ 	release_resource(res);
+ 	kfree(res);
++	kfree(res_name);
+ 	dev_dax->dax_kmem_res = NULL;
+ 
+ 	return 0;
+diff --git a/drivers/dma/dmatest.c b/drivers/dma/dmatest.c
+index 364dd34799d4..0425984db118 100644
+--- a/drivers/dma/dmatest.c
++++ b/drivers/dma/dmatest.c
+@@ -1166,10 +1166,11 @@ static int dmatest_run_set(const char *val, const struct kernel_param *kp)
+ 		mutex_unlock(&info->lock);
+ 		return ret;
+ 	} else if (dmatest_run) {
+-		if (is_threaded_test_pending(info))
+-			start_threaded_tests(info);
+-		else
+-			pr_info("Could not start test, no channels configured\n");
++		if (!is_threaded_test_pending(info)) {
++			pr_info("No channels configured, continue with any\n");
++			add_threaded_test(info);
++		}
++		start_threaded_tests(info);
+ 	} else {
+ 		stop_threaded_test(info);
+ 	}
+diff --git a/drivers/dma/idxd/device.c b/drivers/dma/idxd/device.c
+index f6f49f0f6fae..8d79a8787104 100644
+--- a/drivers/dma/idxd/device.c
++++ b/drivers/dma/idxd/device.c
+@@ -62,6 +62,13 @@ int idxd_unmask_msix_vector(struct idxd_device *idxd, int vec_id)
+ 	perm.ignore = 0;
+ 	iowrite32(perm.bits, idxd->reg_base + offset);
+ 
++	/*
++	 * A readback from the device ensures that any previously generated
++	 * completion record writes are visible to software based on PCI
++	 * ordering rules.
++	 */
++	perm.bits = ioread32(idxd->reg_base + offset);
++
+ 	return 0;
+ }
+ 
+diff --git a/drivers/dma/idxd/irq.c b/drivers/dma/idxd/irq.c
+index d6fcd2e60103..6510791b9921 100644
+--- a/drivers/dma/idxd/irq.c
++++ b/drivers/dma/idxd/irq.c
+@@ -173,6 +173,7 @@ static int irq_process_pending_llist(struct idxd_irq_entry *irq_entry,
+ 	struct llist_node *head;
+ 	int queued = 0;
+ 
++	*processed = 0;
+ 	head = llist_del_all(&irq_entry->pending_llist);
+ 	if (!head)
+ 		return 0;
+@@ -197,6 +198,7 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
+ 	struct list_head *node, *next;
+ 	int queued = 0;
+ 
++	*processed = 0;
+ 	if (list_empty(&irq_entry->work_list))
+ 		return 0;
+ 
+@@ -218,10 +220,9 @@ static int irq_process_work_list(struct idxd_irq_entry *irq_entry,
+ 	return queued;
+ }
+ 
+-irqreturn_t idxd_wq_thread(int irq, void *data)
++static int idxd_desc_process(struct idxd_irq_entry *irq_entry)
+ {
+-	struct idxd_irq_entry *irq_entry = data;
+-	int rc, processed = 0, retry = 0;
++	int rc, processed, total = 0;
+ 
+ 	/*
+ 	 * There are two lists we are processing. The pending_llist is where
+@@ -244,15 +245,26 @@ irqreturn_t idxd_wq_thread(int irq, void *data)
+ 	 */
+ 	do {
+ 		rc = irq_process_work_list(irq_entry, &processed);
+-		if (rc != 0) {
+-			retry++;
++		total += processed;
++		if (rc != 0)
+ 			continue;
+-		}
+ 
+ 		rc = irq_process_pending_llist(irq_entry, &processed);
+-	} while (rc != 0 && retry != 10);
++		total += processed;
++	} while (rc != 0);
++
++	return total;
++}
++
++irqreturn_t idxd_wq_thread(int irq, void *data)
++{
++	struct idxd_irq_entry *irq_entry = data;
++	int processed;
+ 
++	processed = idxd_desc_process(irq_entry);
+ 	idxd_unmask_msix_vector(irq_entry->idxd, irq_entry->id);
++	/* catch anything unprocessed after unmasking */
++	processed += idxd_desc_process(irq_entry);
+ 
+ 	if (processed == 0)
+ 		return IRQ_NONE;
+diff --git a/drivers/dma/owl-dma.c b/drivers/dma/owl-dma.c
+index c683051257fd..66ef70b00ec0 100644
+--- a/drivers/dma/owl-dma.c
++++ b/drivers/dma/owl-dma.c
+@@ -175,13 +175,11 @@ struct owl_dma_txd {
+  * @id: physical index to this channel
+  * @base: virtual memory base for the dma channel
+  * @vchan: the virtual channel currently being served by this physical channel
+- * @lock: a lock to use when altering an instance of this struct
+  */
+ struct owl_dma_pchan {
+ 	u32			id;
+ 	void __iomem		*base;
+ 	struct owl_dma_vchan	*vchan;
+-	spinlock_t		lock;
+ };
+ 
+ /**
+@@ -437,14 +435,14 @@ static struct owl_dma_pchan *owl_dma_get_pchan(struct owl_dma *od,
+ 	for (i = 0; i < od->nr_pchans; i++) {
+ 		pchan = &od->pchans[i];
+ 
+-		spin_lock_irqsave(&pchan->lock, flags);
++		spin_lock_irqsave(&od->lock, flags);
+ 		if (!pchan->vchan) {
+ 			pchan->vchan = vchan;
+-			spin_unlock_irqrestore(&pchan->lock, flags);
++			spin_unlock_irqrestore(&od->lock, flags);
+ 			break;
+ 		}
+ 
+-		spin_unlock_irqrestore(&pchan->lock, flags);
++		spin_unlock_irqrestore(&od->lock, flags);
+ 	}
+ 
+ 	return pchan;
+diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
+index 6e1268552f74..914901a680c8 100644
+--- a/drivers/dma/tegra210-adma.c
++++ b/drivers/dma/tegra210-adma.c
+@@ -900,7 +900,7 @@ static int tegra_adma_probe(struct platform_device *pdev)
+ 	ret = dma_async_device_register(&tdma->dma_dev);
+ 	if (ret < 0) {
+ 		dev_err(&pdev->dev, "ADMA registration failed: %d\n", ret);
+-		goto irq_dispose;
++		goto rpm_put;
+ 	}
+ 
+ 	ret = of_dma_controller_register(pdev->dev.of_node,
+diff --git a/drivers/firmware/efi/libstub/tpm.c b/drivers/firmware/efi/libstub/tpm.c
+index 1d59e103a2e3..e9a684637b70 100644
+--- a/drivers/firmware/efi/libstub/tpm.c
++++ b/drivers/firmware/efi/libstub/tpm.c
+@@ -54,7 +54,7 @@ void efi_retrieve_tpm2_eventlog(void)
+ 	efi_status_t status;
+ 	efi_physical_addr_t log_location = 0, log_last_entry = 0;
+ 	struct linux_efi_tpm_eventlog *log_tbl = NULL;
+-	struct efi_tcg2_final_events_table *final_events_table;
++	struct efi_tcg2_final_events_table *final_events_table = NULL;
+ 	unsigned long first_entry_addr, last_entry_addr;
+ 	size_t log_size, last_entry_size;
+ 	efi_bool_t truncated;
+@@ -127,7 +127,8 @@ void efi_retrieve_tpm2_eventlog(void)
+ 	 * Figure out whether any events have already been logged to the
+ 	 * final events structure, and if so how much space they take up
+ 	 */
+-	final_events_table = get_efi_config_table(LINUX_EFI_TPM_FINAL_LOG_GUID);
++	if (version == EFI_TCG2_EVENT_LOG_FORMAT_TCG_2)
++		final_events_table = get_efi_config_table(LINUX_EFI_TPM_FINAL_LOG_GUID);
+ 	if (final_events_table && final_events_table->nr_events) {
+ 		struct tcg_pcr_event2_head *header;
+ 		int offset;
+diff --git a/drivers/firmware/efi/tpm.c b/drivers/firmware/efi/tpm.c
+index 55b031d2c989..c1955d320fec 100644
+--- a/drivers/firmware/efi/tpm.c
++++ b/drivers/firmware/efi/tpm.c
+@@ -62,8 +62,11 @@ int __init efi_tpm_eventlog_init(void)
+ 	tbl_size = sizeof(*log_tbl) + log_tbl->size;
+ 	memblock_reserve(efi.tpm_log, tbl_size);
+ 
+-	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR)
++	if (efi.tpm_final_log == EFI_INVALID_TABLE_ADDR ||
++	    log_tbl->version != EFI_TCG2_EVENT_LOG_FORMAT_TCG_2) {
++		pr_warn(FW_BUG "TPM Final Events table missing or invalid\n");
+ 		goto out;
++	}
+ 
+ 	final_tbl = early_memremap(efi.tpm_final_log, sizeof(*final_tbl));
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 5e27a67fbc58..0cd11d3d4cf4 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -1941,17 +1941,22 @@ amdgpu_dm_update_connector_after_detect(struct amdgpu_dm_connector *aconnector)
+ 		dc_sink_retain(aconnector->dc_sink);
+ 		if (sink->dc_edid.length == 0) {
+ 			aconnector->edid = NULL;
+-			drm_dp_cec_unset_edid(&aconnector->dm_dp_aux.aux);
++			if (aconnector->dc_link->aux_mode) {
++				drm_dp_cec_unset_edid(
++					&aconnector->dm_dp_aux.aux);
++			}
+ 		} else {
+ 			aconnector->edid =
+-				(struct edid *) sink->dc_edid.raw_edid;
+-
++				(struct edid *)sink->dc_edid.raw_edid;
+ 
+ 			drm_connector_update_edid_property(connector,
+-					aconnector->edid);
+-			drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
+-					    aconnector->edid);
++							   aconnector->edid);
++
++			if (aconnector->dc_link->aux_mode)
++				drm_dp_cec_set_edid(&aconnector->dm_dp_aux.aux,
++						    aconnector->edid);
+ 		}
++
+ 		amdgpu_dm_update_freesync_caps(connector, aconnector->edid);
+ 
+ 	} else {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 188e51600070..b3987124183a 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -803,11 +803,10 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ static void wait_for_no_pipes_pending(struct dc *dc, struct dc_state *context)
+ {
+ 	int i;
+-	int count = 0;
+-	struct pipe_ctx *pipe;
+ 	PERF_TRACE();
+ 	for (i = 0; i < MAX_PIPES; i++) {
+-		pipe = &context->res_ctx.pipe_ctx[i];
++		int count = 0;
++		struct pipe_ctx *pipe = &context->res_ctx.pipe_ctx[i];
+ 
+ 		if (!pipe->plane_state)
+ 			continue;
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+index 3b0afa156d92..54def341c1db 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+@@ -238,8 +238,10 @@ static int submit_pin_objects(struct etnaviv_gem_submit *submit)
+ 		}
+ 
+ 		if ((submit->flags & ETNA_SUBMIT_SOFTPIN) &&
+-		     submit->bos[i].va != mapping->iova)
++		     submit->bos[i].va != mapping->iova) {
++			etnaviv_gem_mapping_unreference(mapping);
+ 			return -EINVAL;
++		}
+ 
+ 		atomic_inc(&etnaviv_obj->gpu_active);
+ 
+diff --git a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
+index e6795bafcbb9..75f9db8f7bec 100644
+--- a/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
++++ b/drivers/gpu/drm/etnaviv/etnaviv_perfmon.c
+@@ -453,7 +453,7 @@ static const struct etnaviv_pm_domain *pm_domain(const struct etnaviv_gpu *gpu,
+ 		if (!(gpu->identity.features & meta->feature))
+ 			continue;
+ 
+-		if (meta->nr_domains < (index - offset)) {
++		if (index - offset >= meta->nr_domains) {
+ 			offset += meta->nr_domains;
+ 			continue;
+ 		}
+diff --git a/drivers/gpu/drm/i915/gvt/display.c b/drivers/gpu/drm/i915/gvt/display.c
+index a62bdf9be682..59aa5e64acb0 100644
+--- a/drivers/gpu/drm/i915/gvt/display.c
++++ b/drivers/gpu/drm/i915/gvt/display.c
+@@ -207,14 +207,41 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
+ 				SKL_FUSE_PG_DIST_STATUS(SKL_PG0) |
+ 				SKL_FUSE_PG_DIST_STATUS(SKL_PG1) |
+ 				SKL_FUSE_PG_DIST_STATUS(SKL_PG2);
+-		vgpu_vreg_t(vgpu, LCPLL1_CTL) |=
+-				LCPLL_PLL_ENABLE |
+-				LCPLL_PLL_LOCK;
+-		vgpu_vreg_t(vgpu, LCPLL2_CTL) |= LCPLL_PLL_ENABLE;
+-
++		/*
++		 * Only 1 PIPE enabled in current vGPU display and PIPE_A is
++		 *  tied to TRANSCODER_A in HW, so it's safe to assume PIPE_A,
++		 *   TRANSCODER_A can be enabled. PORT_x depends on the input of
++		 *   setup_virtual_dp_monitor, we can bind DPLL0 to any PORT_x
++		 *   so we fixed to DPLL0 here.
++		 * Setup DPLL0: DP link clk 1620 MHz, non SSC, DP Mode
++		 */
++		vgpu_vreg_t(vgpu, DPLL_CTRL1) =
++			DPLL_CTRL1_OVERRIDE(DPLL_ID_SKL_DPLL0);
++		vgpu_vreg_t(vgpu, DPLL_CTRL1) |=
++			DPLL_CTRL1_LINK_RATE(DPLL_CTRL1_LINK_RATE_1620, DPLL_ID_SKL_DPLL0);
++		vgpu_vreg_t(vgpu, LCPLL1_CTL) =
++			LCPLL_PLL_ENABLE | LCPLL_PLL_LOCK;
++		vgpu_vreg_t(vgpu, DPLL_STATUS) = DPLL_LOCK(DPLL_ID_SKL_DPLL0);
++		/*
++		 * Golden M/N are calculated based on:
++		 *   24 bpp, 4 lanes, 154000 pixel clk (from virtual EDID),
++		 *   DP link clk 1620 MHz and non-constant_n.
++		 * TODO: calculate DP link symbol clk and stream clk m/n.
++		 */
++		vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) = 63 << TU_SIZE_SHIFT;
++		vgpu_vreg_t(vgpu, PIPE_DATA_M1(TRANSCODER_A)) |= 0x5b425e;
++		vgpu_vreg_t(vgpu, PIPE_DATA_N1(TRANSCODER_A)) = 0x800000;
++		vgpu_vreg_t(vgpu, PIPE_LINK_M1(TRANSCODER_A)) = 0x3cd6e;
++		vgpu_vreg_t(vgpu, PIPE_LINK_N1(TRANSCODER_A)) = 0x80000;
+ 	}
+ 
+ 	if (intel_vgpu_has_monitor_on_port(vgpu, PORT_B)) {
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) &=
++			~DPLL_CTRL2_DDI_CLK_OFF(PORT_B);
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
++			DPLL_CTRL2_DDI_CLK_SEL(DPLL_ID_SKL_DPLL0, PORT_B);
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
++			DPLL_CTRL2_DDI_SEL_OVERRIDE(PORT_B);
+ 		vgpu_vreg_t(vgpu, SFUSE_STRAP) |= SFUSE_STRAP_DDIB_DETECTED;
+ 		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
+ 			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
+@@ -235,6 +262,12 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
+ 	}
+ 
+ 	if (intel_vgpu_has_monitor_on_port(vgpu, PORT_C)) {
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) &=
++			~DPLL_CTRL2_DDI_CLK_OFF(PORT_C);
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
++			DPLL_CTRL2_DDI_CLK_SEL(DPLL_ID_SKL_DPLL0, PORT_C);
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
++			DPLL_CTRL2_DDI_SEL_OVERRIDE(PORT_C);
+ 		vgpu_vreg_t(vgpu, SDEISR) |= SDE_PORTC_HOTPLUG_CPT;
+ 		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
+ 			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
+@@ -255,6 +288,12 @@ static void emulate_monitor_status_change(struct intel_vgpu *vgpu)
+ 	}
+ 
+ 	if (intel_vgpu_has_monitor_on_port(vgpu, PORT_D)) {
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) &=
++			~DPLL_CTRL2_DDI_CLK_OFF(PORT_D);
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
++			DPLL_CTRL2_DDI_CLK_SEL(DPLL_ID_SKL_DPLL0, PORT_D);
++		vgpu_vreg_t(vgpu, DPLL_CTRL2) |=
++			DPLL_CTRL2_DDI_SEL_OVERRIDE(PORT_D);
+ 		vgpu_vreg_t(vgpu, SDEISR) |= SDE_PORTD_HOTPLUG_CPT;
+ 		vgpu_vreg_t(vgpu, TRANS_DDI_FUNC_CTL(TRANSCODER_A)) &=
+ 			~(TRANS_DDI_BPC_MASK | TRANS_DDI_MODE_SELECT_MASK |
+diff --git a/drivers/gpu/drm/i915/i915_request.c b/drivers/gpu/drm/i915/i915_request.c
+index 32ab154db788..1f50fc8bcebf 100644
+--- a/drivers/gpu/drm/i915/i915_request.c
++++ b/drivers/gpu/drm/i915/i915_request.c
+@@ -947,8 +947,10 @@ i915_request_await_request(struct i915_request *to, struct i915_request *from)
+ 	GEM_BUG_ON(to == from);
+ 	GEM_BUG_ON(to->timeline == from->timeline);
+ 
+-	if (i915_request_completed(from))
++	if (i915_request_completed(from)) {
++		i915_sw_fence_set_error_once(&to->submit, from->fence.error);
+ 		return 0;
++	}
+ 
+ 	if (to->engine->schedule) {
+ 		ret = i915_sched_node_add_dependency(&to->sched,
+diff --git a/drivers/hid/hid-alps.c b/drivers/hid/hid-alps.c
+index fa704153cb00..b2ad319a74b9 100644
+--- a/drivers/hid/hid-alps.c
++++ b/drivers/hid/hid-alps.c
+@@ -802,6 +802,7 @@ static int alps_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 		break;
+ 	case HID_DEVICE_ID_ALPS_U1_DUAL:
+ 	case HID_DEVICE_ID_ALPS_U1:
++	case HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY:
+ 		data->dev_type = U1;
+ 		break;
+ 	default:
+diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
+index 9f2213426556..b1d6156ebf9d 100644
+--- a/drivers/hid/hid-ids.h
++++ b/drivers/hid/hid-ids.h
+@@ -79,10 +79,10 @@
+ #define HID_DEVICE_ID_ALPS_U1_DUAL_PTP	0x121F
+ #define HID_DEVICE_ID_ALPS_U1_DUAL_3BTN_PTP	0x1220
+ #define HID_DEVICE_ID_ALPS_U1		0x1215
++#define HID_DEVICE_ID_ALPS_U1_UNICORN_LEGACY         0x121E
+ #define HID_DEVICE_ID_ALPS_T4_BTNLESS	0x120C
+ #define HID_DEVICE_ID_ALPS_1222		0x1222
+ 
+-
+ #define USB_VENDOR_ID_AMI		0x046b
+ #define USB_DEVICE_ID_AMI_VIRT_KEYBOARD_AND_MOUSE	0xff10
+ 
+@@ -385,6 +385,7 @@
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_7349	0x7349
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7	0x73f7
+ #define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001	0xa001
++#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002	0xc002
+ 
+ #define USB_VENDOR_ID_ELAN		0x04f3
+ #define USB_DEVICE_ID_TOSHIBA_CLICK_L9W	0x0401
+@@ -755,6 +756,7 @@
+ #define USB_DEVICE_ID_LOGITECH_RUMBLEPAD2	0xc218
+ #define USB_DEVICE_ID_LOGITECH_RUMBLEPAD2_2	0xc219
+ #define USB_DEVICE_ID_LOGITECH_G15_LCD		0xc222
++#define USB_DEVICE_ID_LOGITECH_G11		0xc225
+ #define USB_DEVICE_ID_LOGITECH_G15_V2_LCD	0xc227
+ #define USB_DEVICE_ID_LOGITECH_G510		0xc22d
+ #define USB_DEVICE_ID_LOGITECH_G510_USB_AUDIO	0xc22e
+@@ -1092,6 +1094,9 @@
+ #define USB_DEVICE_ID_SYMBOL_SCANNER_2	0x1300
+ #define USB_DEVICE_ID_SYMBOL_SCANNER_3	0x1200
+ 
++#define I2C_VENDOR_ID_SYNAPTICS     0x06cb
++#define I2C_PRODUCT_ID_SYNAPTICS_SYNA2393   0x7a13
++
+ #define USB_VENDOR_ID_SYNAPTICS		0x06cb
+ #define USB_DEVICE_ID_SYNAPTICS_TP	0x0001
+ #define USB_DEVICE_ID_SYNAPTICS_INT_TP	0x0002
+@@ -1106,6 +1111,7 @@
+ #define USB_DEVICE_ID_SYNAPTICS_LTS2	0x1d10
+ #define USB_DEVICE_ID_SYNAPTICS_HD	0x0ac3
+ #define USB_DEVICE_ID_SYNAPTICS_QUAD_HD	0x1ac3
++#define USB_DEVICE_ID_SYNAPTICS_DELL_K12A	0x2819
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5_012	0x2968
+ #define USB_DEVICE_ID_SYNAPTICS_TP_V103	0x5710
+ #define USB_DEVICE_ID_SYNAPTICS_ACER_SWITCH5	0x81a7
+diff --git a/drivers/hid/hid-lg-g15.c b/drivers/hid/hid-lg-g15.c
+index ad4b5412a9f4..ef0cbcd7540d 100644
+--- a/drivers/hid/hid-lg-g15.c
++++ b/drivers/hid/hid-lg-g15.c
+@@ -872,6 +872,10 @@ error_hw_stop:
+ }
+ 
+ static const struct hid_device_id lg_g15_devices[] = {
++	/* The G11 is a G15 without the LCD, treat it as a G15 */
++	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
++		USB_DEVICE_ID_LOGITECH_G11),
++		.driver_data = LG_G15 },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_LOGITECH,
+ 			 USB_DEVICE_ID_LOGITECH_G15_LCD),
+ 		.driver_data = LG_G15 },
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 362805ddf377..03c720b47306 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -1922,6 +1922,9 @@ static const struct hid_device_id mt_devices[] = {
+ 	{ .driver_data = MT_CLS_EGALAX_SERIAL,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
+ 			USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001) },
++	{ .driver_data = MT_CLS_EGALAX,
++		MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
++			USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) },
+ 
+ 	/* Elitegroup panel */
+ 	{ .driver_data = MT_CLS_SERIAL,
+diff --git a/drivers/hid/hid-quirks.c b/drivers/hid/hid-quirks.c
+index 3735546bb524..acc7c14f7fbc 100644
+--- a/drivers/hid/hid-quirks.c
++++ b/drivers/hid/hid-quirks.c
+@@ -163,6 +163,7 @@ static const struct hid_device_id hid_quirks[] = {
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_LTS2), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_QUAD_HD), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_TP_V103), HID_QUIRK_NO_INIT_REPORTS },
++	{ HID_USB_DEVICE(USB_VENDOR_ID_SYNAPTICS, USB_DEVICE_ID_SYNAPTICS_DELL_K12A), HID_QUIRK_NO_INIT_REPORTS },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_TOPMAX, USB_DEVICE_ID_TOPMAX_COBRAPAD), HID_QUIRK_BADPAD },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_TOUCHPACK, USB_DEVICE_ID_TOUCHPACK_RTS), HID_QUIRK_MULTI_INPUT },
+ 	{ HID_USB_DEVICE(USB_VENDOR_ID_TPV, USB_DEVICE_ID_TPV_OPTICAL_TOUCHSCREEN_8882), HID_QUIRK_NOGET },
+diff --git a/drivers/hid/i2c-hid/i2c-hid-core.c b/drivers/hid/i2c-hid/i2c-hid-core.c
+index 009000c5d55c..294c84e136d7 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-core.c
++++ b/drivers/hid/i2c-hid/i2c-hid-core.c
+@@ -177,6 +177,8 @@ static const struct i2c_hid_quirks {
+ 		 I2C_HID_QUIRK_BOGUS_IRQ },
+ 	{ USB_VENDOR_ID_ALPS_JP, HID_ANY_ID,
+ 		 I2C_HID_QUIRK_RESET_ON_RESUME },
++	{ I2C_VENDOR_ID_SYNAPTICS, I2C_PRODUCT_ID_SYNAPTICS_SYNA2393,
++		 I2C_HID_QUIRK_RESET_ON_RESUME },
+ 	{ USB_VENDOR_ID_ITE, I2C_DEVICE_ID_ITE_LENOVO_LEGION_Y720,
+ 		I2C_HID_QUIRK_BAD_INPUT_SIZE },
+ 	{ 0, 0 }
+diff --git a/drivers/i2c/i2c-core-base.c b/drivers/i2c/i2c-core-base.c
+index cefad0881942..fd3199782b6e 100644
+--- a/drivers/i2c/i2c-core-base.c
++++ b/drivers/i2c/i2c-core-base.c
+@@ -338,8 +338,10 @@ static int i2c_device_probe(struct device *dev)
+ 		} else if (ACPI_COMPANION(dev)) {
+ 			irq = i2c_acpi_get_irq(client);
+ 		}
+-		if (irq == -EPROBE_DEFER)
+-			return irq;
++		if (irq == -EPROBE_DEFER) {
++			status = irq;
++			goto put_sync_adapter;
++		}
+ 
+ 		if (irq < 0)
+ 			irq = 0;
+@@ -353,15 +355,19 @@ static int i2c_device_probe(struct device *dev)
+ 	 */
+ 	if (!driver->id_table &&
+ 	    !i2c_acpi_match_device(dev->driver->acpi_match_table, client) &&
+-	    !i2c_of_match_device(dev->driver->of_match_table, client))
+-		return -ENODEV;
++	    !i2c_of_match_device(dev->driver->of_match_table, client)) {
++		status = -ENODEV;
++		goto put_sync_adapter;
++	}
+ 
+ 	if (client->flags & I2C_CLIENT_WAKE) {
+ 		int wakeirq;
+ 
+ 		wakeirq = of_irq_get_byname(dev->of_node, "wakeup");
+-		if (wakeirq == -EPROBE_DEFER)
+-			return wakeirq;
++		if (wakeirq == -EPROBE_DEFER) {
++			status = wakeirq;
++			goto put_sync_adapter;
++		}
+ 
+ 		device_init_wakeup(&client->dev, true);
+ 
+@@ -408,6 +414,10 @@ err_detach_pm_domain:
+ err_clear_wakeup_irq:
+ 	dev_pm_clear_wake_irq(&client->dev);
+ 	device_init_wakeup(&client->dev, false);
++put_sync_adapter:
++	if (client->flags & I2C_CLIENT_HOST_NOTIFY)
++		pm_runtime_put_sync(&client->adapter->dev);
++
+ 	return status;
+ }
+ 
+diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
+index 2ea4585d18c5..94beacc41302 100644
+--- a/drivers/i2c/i2c-dev.c
++++ b/drivers/i2c/i2c-dev.c
+@@ -40,7 +40,7 @@
+ struct i2c_dev {
+ 	struct list_head list;
+ 	struct i2c_adapter *adap;
+-	struct device *dev;
++	struct device dev;
+ 	struct cdev cdev;
+ };
+ 
+@@ -84,12 +84,14 @@ static struct i2c_dev *get_free_i2c_dev(struct i2c_adapter *adap)
+ 	return i2c_dev;
+ }
+ 
+-static void put_i2c_dev(struct i2c_dev *i2c_dev)
++static void put_i2c_dev(struct i2c_dev *i2c_dev, bool del_cdev)
+ {
+ 	spin_lock(&i2c_dev_list_lock);
+ 	list_del(&i2c_dev->list);
+ 	spin_unlock(&i2c_dev_list_lock);
+-	kfree(i2c_dev);
++	if (del_cdev)
++		cdev_device_del(&i2c_dev->cdev, &i2c_dev->dev);
++	put_device(&i2c_dev->dev);
+ }
+ 
+ static ssize_t name_show(struct device *dev,
+@@ -628,6 +630,14 @@ static const struct file_operations i2cdev_fops = {
+ 
+ static struct class *i2c_dev_class;
+ 
++static void i2cdev_dev_release(struct device *dev)
++{
++	struct i2c_dev *i2c_dev;
++
++	i2c_dev = container_of(dev, struct i2c_dev, dev);
++	kfree(i2c_dev);
++}
++
+ static int i2cdev_attach_adapter(struct device *dev, void *dummy)
+ {
+ 	struct i2c_adapter *adap;
+@@ -644,27 +654,23 @@ static int i2cdev_attach_adapter(struct device *dev, void *dummy)
+ 
+ 	cdev_init(&i2c_dev->cdev, &i2cdev_fops);
+ 	i2c_dev->cdev.owner = THIS_MODULE;
+-	res = cdev_add(&i2c_dev->cdev, MKDEV(I2C_MAJOR, adap->nr), 1);
+-	if (res)
+-		goto error_cdev;
+-
+-	/* register this i2c device with the driver core */
+-	i2c_dev->dev = device_create(i2c_dev_class, &adap->dev,
+-				     MKDEV(I2C_MAJOR, adap->nr), NULL,
+-				     "i2c-%d", adap->nr);
+-	if (IS_ERR(i2c_dev->dev)) {
+-		res = PTR_ERR(i2c_dev->dev);
+-		goto error;
++
++	device_initialize(&i2c_dev->dev);
++	i2c_dev->dev.devt = MKDEV(I2C_MAJOR, adap->nr);
++	i2c_dev->dev.class = i2c_dev_class;
++	i2c_dev->dev.parent = &adap->dev;
++	i2c_dev->dev.release = i2cdev_dev_release;
++	dev_set_name(&i2c_dev->dev, "i2c-%d", adap->nr);
++
++	res = cdev_device_add(&i2c_dev->cdev, &i2c_dev->dev);
++	if (res) {
++		put_i2c_dev(i2c_dev, false);
++		return res;
+ 	}
+ 
+ 	pr_debug("i2c-dev: adapter [%s] registered as minor %d\n",
+ 		 adap->name, adap->nr);
+ 	return 0;
+-error:
+-	cdev_del(&i2c_dev->cdev);
+-error_cdev:
+-	put_i2c_dev(i2c_dev);
+-	return res;
+ }
+ 
+ static int i2cdev_detach_adapter(struct device *dev, void *dummy)
+@@ -680,9 +686,7 @@ static int i2cdev_detach_adapter(struct device *dev, void *dummy)
+ 	if (!i2c_dev) /* attach_adapter must have failed */
+ 		return 0;
+ 
+-	cdev_del(&i2c_dev->cdev);
+-	put_i2c_dev(i2c_dev);
+-	device_destroy(i2c_dev_class, MKDEV(I2C_MAJOR, adap->nr));
++	put_i2c_dev(i2c_dev, true);
+ 
+ 	pr_debug("i2c-dev: adapter [%s] unregistered\n", adap->name);
+ 	return 0;
+diff --git a/drivers/i2c/muxes/i2c-demux-pinctrl.c b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+index 0e16490eb3a1..5365199a31f4 100644
+--- a/drivers/i2c/muxes/i2c-demux-pinctrl.c
++++ b/drivers/i2c/muxes/i2c-demux-pinctrl.c
+@@ -272,6 +272,7 @@ static int i2c_demux_pinctrl_probe(struct platform_device *pdev)
+ err_rollback_available:
+ 	device_remove_file(&pdev->dev, &dev_attr_available_masters);
+ err_rollback:
++	i2c_demux_deactivate_master(priv);
+ 	for (j = 0; j < i; j++) {
+ 		of_node_put(priv->chan[j].parent_np);
+ 		of_changeset_destroy(&priv->chan[j].chgset);
+diff --git a/drivers/iio/accel/sca3000.c b/drivers/iio/accel/sca3000.c
+index 66d768d971e1..6e429072e44a 100644
+--- a/drivers/iio/accel/sca3000.c
++++ b/drivers/iio/accel/sca3000.c
+@@ -980,7 +980,7 @@ static int sca3000_read_data(struct sca3000_state *st,
+ 	st->tx[0] = SCA3000_READ_REG(reg_address_high);
+ 	ret = spi_sync_transfer(st->us, xfer, ARRAY_SIZE(xfer));
+ 	if (ret) {
+-		dev_err(get_device(&st->us->dev), "problem reading register");
++		dev_err(&st->us->dev, "problem reading register\n");
+ 		return ret;
+ 	}
+ 
+diff --git a/drivers/iio/adc/stm32-adc.c b/drivers/iio/adc/stm32-adc.c
+index ae622ee6d08c..dfc3a306c667 100644
+--- a/drivers/iio/adc/stm32-adc.c
++++ b/drivers/iio/adc/stm32-adc.c
+@@ -1812,18 +1812,18 @@ static int stm32_adc_chan_of_init(struct iio_dev *indio_dev)
+ 	return 0;
+ }
+ 
+-static int stm32_adc_dma_request(struct iio_dev *indio_dev)
++static int stm32_adc_dma_request(struct device *dev, struct iio_dev *indio_dev)
+ {
+ 	struct stm32_adc *adc = iio_priv(indio_dev);
+ 	struct dma_slave_config config;
+ 	int ret;
+ 
+-	adc->dma_chan = dma_request_chan(&indio_dev->dev, "rx");
++	adc->dma_chan = dma_request_chan(dev, "rx");
+ 	if (IS_ERR(adc->dma_chan)) {
+ 		ret = PTR_ERR(adc->dma_chan);
+ 		if (ret != -ENODEV) {
+ 			if (ret != -EPROBE_DEFER)
+-				dev_err(&indio_dev->dev,
++				dev_err(dev,
+ 					"DMA channel request failed with %d\n",
+ 					ret);
+ 			return ret;
+@@ -1930,7 +1930,7 @@ static int stm32_adc_probe(struct platform_device *pdev)
+ 	if (ret < 0)
+ 		return ret;
+ 
+-	ret = stm32_adc_dma_request(indio_dev);
++	ret = stm32_adc_dma_request(dev, indio_dev);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/iio/adc/stm32-dfsdm-adc.c b/drivers/iio/adc/stm32-dfsdm-adc.c
+index 76a60d93fe23..506bf519f64c 100644
+--- a/drivers/iio/adc/stm32-dfsdm-adc.c
++++ b/drivers/iio/adc/stm32-dfsdm-adc.c
+@@ -62,7 +62,7 @@ enum sd_converter_type {
+ 
+ struct stm32_dfsdm_dev_data {
+ 	int type;
+-	int (*init)(struct iio_dev *indio_dev);
++	int (*init)(struct device *dev, struct iio_dev *indio_dev);
+ 	unsigned int num_channels;
+ 	const struct regmap_config *regmap_cfg;
+ };
+@@ -1365,11 +1365,12 @@ static void stm32_dfsdm_dma_release(struct iio_dev *indio_dev)
+ 	}
+ }
+ 
+-static int stm32_dfsdm_dma_request(struct iio_dev *indio_dev)
++static int stm32_dfsdm_dma_request(struct device *dev,
++				   struct iio_dev *indio_dev)
+ {
+ 	struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);
+ 
+-	adc->dma_chan = dma_request_chan(&indio_dev->dev, "rx");
++	adc->dma_chan = dma_request_chan(dev, "rx");
+ 	if (IS_ERR(adc->dma_chan)) {
+ 		int ret = PTR_ERR(adc->dma_chan);
+ 
+@@ -1425,7 +1426,7 @@ static int stm32_dfsdm_adc_chan_init_one(struct iio_dev *indio_dev,
+ 					  &adc->dfsdm->ch_list[ch->channel]);
+ }
+ 
+-static int stm32_dfsdm_audio_init(struct iio_dev *indio_dev)
++static int stm32_dfsdm_audio_init(struct device *dev, struct iio_dev *indio_dev)
+ {
+ 	struct iio_chan_spec *ch;
+ 	struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);
+@@ -1452,10 +1453,10 @@ static int stm32_dfsdm_audio_init(struct iio_dev *indio_dev)
+ 	indio_dev->num_channels = 1;
+ 	indio_dev->channels = ch;
+ 
+-	return stm32_dfsdm_dma_request(indio_dev);
++	return stm32_dfsdm_dma_request(dev, indio_dev);
+ }
+ 
+-static int stm32_dfsdm_adc_init(struct iio_dev *indio_dev)
++static int stm32_dfsdm_adc_init(struct device *dev, struct iio_dev *indio_dev)
+ {
+ 	struct iio_chan_spec *ch;
+ 	struct stm32_dfsdm_adc *adc = iio_priv(indio_dev);
+@@ -1499,17 +1500,17 @@ static int stm32_dfsdm_adc_init(struct iio_dev *indio_dev)
+ 	init_completion(&adc->completion);
+ 
+ 	/* Optionally request DMA */
+-	ret = stm32_dfsdm_dma_request(indio_dev);
++	ret = stm32_dfsdm_dma_request(dev, indio_dev);
+ 	if (ret) {
+ 		if (ret != -ENODEV) {
+ 			if (ret != -EPROBE_DEFER)
+-				dev_err(&indio_dev->dev,
++				dev_err(dev,
+ 					"DMA channel request failed with %d\n",
+ 					ret);
+ 			return ret;
+ 		}
+ 
+-		dev_dbg(&indio_dev->dev, "No DMA support\n");
++		dev_dbg(dev, "No DMA support\n");
+ 		return 0;
+ 	}
+ 
+@@ -1622,7 +1623,7 @@ static int stm32_dfsdm_adc_probe(struct platform_device *pdev)
+ 		adc->dfsdm->fl_list[adc->fl_id].sync_mode = val;
+ 
+ 	adc->dev_data = dev_data;
+-	ret = dev_data->init(iio);
++	ret = dev_data->init(dev, iio);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/iio/adc/ti-ads8344.c b/drivers/iio/adc/ti-ads8344.c
+index abe4b56c847c..8a8792010c20 100644
+--- a/drivers/iio/adc/ti-ads8344.c
++++ b/drivers/iio/adc/ti-ads8344.c
+@@ -32,16 +32,17 @@ struct ads8344 {
+ 	u8 rx_buf[3];
+ };
+ 
+-#define ADS8344_VOLTAGE_CHANNEL(chan, si)				\
++#define ADS8344_VOLTAGE_CHANNEL(chan, addr)				\
+ 	{								\
+ 		.type = IIO_VOLTAGE,					\
+ 		.indexed = 1,						\
+ 		.channel = chan,					\
+ 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),		\
+ 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),	\
++		.address = addr,					\
+ 	}
+ 
+-#define ADS8344_VOLTAGE_CHANNEL_DIFF(chan1, chan2, si)			\
++#define ADS8344_VOLTAGE_CHANNEL_DIFF(chan1, chan2, addr)		\
+ 	{								\
+ 		.type = IIO_VOLTAGE,					\
+ 		.indexed = 1,						\
+@@ -50,6 +51,7 @@ struct ads8344 {
+ 		.differential = 1,					\
+ 		.info_mask_separate = BIT(IIO_CHAN_INFO_RAW),		\
+ 		.info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),	\
++		.address = addr,					\
+ 	}
+ 
+ static const struct iio_chan_spec ads8344_channels[] = {
+@@ -105,7 +107,7 @@ static int ads8344_read_raw(struct iio_dev *iio,
+ 	switch (mask) {
+ 	case IIO_CHAN_INFO_RAW:
+ 		mutex_lock(&adc->lock);
+-		*value = ads8344_adc_conversion(adc, channel->scan_index,
++		*value = ads8344_adc_conversion(adc, channel->address,
+ 						channel->differential);
+ 		mutex_unlock(&adc->lock);
+ 		if (*value < 0)
+diff --git a/drivers/iio/dac/vf610_dac.c b/drivers/iio/dac/vf610_dac.c
+index 71f8a5c471c4..7f1e9317c3f3 100644
+--- a/drivers/iio/dac/vf610_dac.c
++++ b/drivers/iio/dac/vf610_dac.c
+@@ -223,6 +223,7 @@ static int vf610_dac_probe(struct platform_device *pdev)
+ 	return 0;
+ 
+ error_iio_device_register:
++	vf610_dac_exit(info);
+ 	clk_disable_unprepare(info->clk);
+ 
+ 	return ret;
+diff --git a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+index 64ef07a30726..1cf98195f84d 100644
+--- a/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
++++ b/drivers/iio/imu/st_lsm6dsx/st_lsm6dsx_shub.c
+@@ -544,8 +544,10 @@ st_lsm6dsx_shub_write_raw(struct iio_dev *iio_dev,
+ 
+ 			ref_sensor = iio_priv(hw->iio_devs[ST_LSM6DSX_ID_ACC]);
+ 			odr = st_lsm6dsx_check_odr(ref_sensor, val, &odr_val);
+-			if (odr < 0)
+-				return odr;
++			if (odr < 0) {
++				err = odr;
++				goto release;
++			}
+ 
+ 			sensor->ext_info.slv_odr = val;
+ 			sensor->odr = odr;
+@@ -557,6 +559,7 @@ st_lsm6dsx_shub_write_raw(struct iio_dev *iio_dev,
+ 		break;
+ 	}
+ 
++release:
+ 	iio_device_release_direct_mode(iio_dev);
+ 
+ 	return err;
+diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
+index 500d0a8c966f..2aa46a6de172 100644
+--- a/drivers/iommu/amd_iommu.c
++++ b/drivers/iommu/amd_iommu.c
+@@ -127,7 +127,8 @@ static inline int get_acpihid_device_id(struct device *dev,
+ 		return -ENODEV;
+ 
+ 	list_for_each_entry(p, &acpihid_map, list) {
+-		if (acpi_dev_hid_uid_match(adev, p->hid, p->uid)) {
++		if (acpi_dev_hid_uid_match(adev, p->hid,
++					   p->uid[0] ? p->uid : NULL)) {
+ 			if (entry)
+ 				*entry = p;
+ 			return p->devid;
+@@ -1499,8 +1500,19 @@ static u64 *alloc_pte(struct protection_domain *domain,
+ 	amd_iommu_domain_get_pgtable(domain, &pgtable);
+ 
+ 	while (address > PM_LEVEL_SIZE(pgtable.mode)) {
+-		*updated = increase_address_space(domain, address, gfp) || *updated;
++		bool upd = increase_address_space(domain, address, gfp);
++
++		/* Read new values to check if update was successful */
+ 		amd_iommu_domain_get_pgtable(domain, &pgtable);
++
++		/*
++		 * Return an error if there is no memory to update the
++		 * page-table.
++		 */
++		if (!upd && (address > PM_LEVEL_SIZE(pgtable.mode)))
++			return NULL;
++
++		*updated = *updated || upd;
+ 	}
+ 
+ 
+@@ -2333,6 +2345,7 @@ static void update_domain(struct protection_domain *domain)
+ 
+ 	/* Flush domain TLB(s) and wait for completion */
+ 	domain_flush_tlb_pde(domain);
++	domain_flush_complete(domain);
+ }
+ 
+ int __init amd_iommu_init_api(void)
+diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
+index 2b9a67ecc6ac..5b81fd16f5fa 100644
+--- a/drivers/iommu/amd_iommu_init.c
++++ b/drivers/iommu/amd_iommu_init.c
+@@ -1329,8 +1329,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
+ 		}
+ 		case IVHD_DEV_ACPI_HID: {
+ 			u16 devid;
+-			u8 hid[ACPIHID_HID_LEN] = {0};
+-			u8 uid[ACPIHID_UID_LEN] = {0};
++			u8 hid[ACPIHID_HID_LEN];
++			u8 uid[ACPIHID_UID_LEN];
+ 			int ret;
+ 
+ 			if (h->type != 0x40) {
+@@ -1347,6 +1347,7 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
+ 				break;
+ 			}
+ 
++			uid[0] = '\0';
+ 			switch (e->uidf) {
+ 			case UID_NOT_PRESENT:
+ 
+@@ -1361,8 +1362,8 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
+ 				break;
+ 			case UID_IS_CHARACTER:
+ 
+-				memcpy(uid, (u8 *)(&e->uid), ACPIHID_UID_LEN - 1);
+-				uid[ACPIHID_UID_LEN - 1] = '\0';
++				memcpy(uid, &e->uid, e->uidl);
++				uid[e->uidl] = '\0';
+ 
+ 				break;
+ 			default:
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 8d2477941fd9..22b28076d48e 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -692,6 +692,15 @@ out:
+ 	return ret;
+ }
+ 
++static bool iommu_is_attach_deferred(struct iommu_domain *domain,
++				     struct device *dev)
++{
++	if (domain->ops->is_attach_deferred)
++		return domain->ops->is_attach_deferred(domain, dev);
++
++	return false;
++}
++
+ /**
+  * iommu_group_add_device - add a device to an iommu group
+  * @group: the group into which to add the device (reference should be held)
+@@ -746,7 +755,7 @@ rename:
+ 
+ 	mutex_lock(&group->mutex);
+ 	list_add_tail(&device->list, &group->devices);
+-	if (group->domain)
++	if (group->domain  && !iommu_is_attach_deferred(group->domain, dev))
+ 		ret = __iommu_attach_device(group->domain, dev);
+ 	mutex_unlock(&group->mutex);
+ 	if (ret)
+@@ -1652,9 +1661,6 @@ static int __iommu_attach_device(struct iommu_domain *domain,
+ 				 struct device *dev)
+ {
+ 	int ret;
+-	if ((domain->ops->is_attach_deferred != NULL) &&
+-	    domain->ops->is_attach_deferred(domain, dev))
+-		return 0;
+ 
+ 	if (unlikely(domain->ops->attach_dev == NULL))
+ 		return -ENODEV;
+@@ -1726,8 +1732,7 @@ EXPORT_SYMBOL_GPL(iommu_sva_unbind_gpasid);
+ static void __iommu_detach_device(struct iommu_domain *domain,
+ 				  struct device *dev)
+ {
+-	if ((domain->ops->is_attach_deferred != NULL) &&
+-	    domain->ops->is_attach_deferred(domain, dev))
++	if (iommu_is_attach_deferred(domain, dev))
+ 		return;
+ 
+ 	if (unlikely(domain->ops->detach_dev == NULL))
+diff --git a/drivers/ipack/carriers/tpci200.c b/drivers/ipack/carriers/tpci200.c
+index 23445ebfda5c..ec71063fff76 100644
+--- a/drivers/ipack/carriers/tpci200.c
++++ b/drivers/ipack/carriers/tpci200.c
+@@ -306,6 +306,7 @@ static int tpci200_register(struct tpci200_board *tpci200)
+ 			"(bn 0x%X, sn 0x%X) failed to map driver user space!",
+ 			tpci200->info->pdev->bus->number,
+ 			tpci200->info->pdev->devfn);
++		res = -ENOMEM;
+ 		goto out_release_mem8_space;
+ 	}
+ 
+diff --git a/drivers/misc/cardreader/rtsx_pcr.c b/drivers/misc/cardreader/rtsx_pcr.c
+index 06038b325b02..55da6428ceb0 100644
+--- a/drivers/misc/cardreader/rtsx_pcr.c
++++ b/drivers/misc/cardreader/rtsx_pcr.c
+@@ -142,6 +142,9 @@ static void rtsx_comm_pm_full_on(struct rtsx_pcr *pcr)
+ 
+ 	rtsx_disable_aspm(pcr);
+ 
++	/* Fixes DMA transfer timeout issue after disabling ASPM on RTS5260 */
++	msleep(1);
++
+ 	if (option->ltr_enabled)
+ 		rtsx_set_ltr_latency(pcr, option->ltr_active_latency);
+ 
+diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
+index 1e3edbbacb1e..c6b163060c76 100644
+--- a/drivers/misc/mei/client.c
++++ b/drivers/misc/mei/client.c
+@@ -266,6 +266,7 @@ void mei_me_cl_rm_by_uuid(struct mei_device *dev, const uuid_le *uuid)
+ 	down_write(&dev->me_clients_rwsem);
+ 	me_cl = __mei_me_cl_by_uuid(dev, uuid);
+ 	__mei_me_cl_del(dev, me_cl);
++	mei_me_cl_put(me_cl);
+ 	up_write(&dev->me_clients_rwsem);
+ }
+ 
+@@ -287,6 +288,7 @@ void mei_me_cl_rm_by_uuid_id(struct mei_device *dev, const uuid_le *uuid, u8 id)
+ 	down_write(&dev->me_clients_rwsem);
+ 	me_cl = __mei_me_cl_by_uuid_id(dev, uuid, id);
+ 	__mei_me_cl_del(dev, me_cl);
++	mei_me_cl_put(me_cl);
+ 	up_write(&dev->me_clients_rwsem);
+ }
+ 
+diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
+index 5fac4355b9c2..559b6930b6f6 100644
+--- a/drivers/mtd/mtdcore.c
++++ b/drivers/mtd/mtdcore.c
+@@ -551,7 +551,7 @@ static int mtd_nvmem_add(struct mtd_info *mtd)
+ 
+ 	config.id = -1;
+ 	config.dev = &mtd->dev;
+-	config.name = mtd->name;
++	config.name = dev_name(&mtd->dev);
+ 	config.owner = THIS_MODULE;
+ 	config.reg_read = mtd_nvmem_reg_read;
+ 	config.size = mtd->size;
+diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c
+index 8dda51bbdd11..0d21c68bfe24 100644
+--- a/drivers/mtd/nand/spi/core.c
++++ b/drivers/mtd/nand/spi/core.c
+@@ -1049,6 +1049,10 @@ static int spinand_init(struct spinand_device *spinand)
+ 
+ 	mtd->oobavail = ret;
+ 
++	/* Propagate ECC information to mtd_info */
++	mtd->ecc_strength = nand->eccreq.strength;
++	mtd->ecc_step_size = nand->eccreq.step_size;
++
+ 	return 0;
+ 
+ err_cleanup_nanddev:
+diff --git a/drivers/mtd/ubi/debug.c b/drivers/mtd/ubi/debug.c
+index 54646c2c2744..ac2bdba8bb1a 100644
+--- a/drivers/mtd/ubi/debug.c
++++ b/drivers/mtd/ubi/debug.c
+@@ -393,9 +393,6 @@ static void *eraseblk_count_seq_start(struct seq_file *s, loff_t *pos)
+ {
+ 	struct ubi_device *ubi = s->private;
+ 
+-	if (*pos == 0)
+-		return SEQ_START_TOKEN;
+-
+ 	if (*pos < ubi->peb_count)
+ 		return pos;
+ 
+@@ -409,8 +406,6 @@ static void *eraseblk_count_seq_next(struct seq_file *s, void *v, loff_t *pos)
+ {
+ 	struct ubi_device *ubi = s->private;
+ 
+-	if (v == SEQ_START_TOKEN)
+-		return pos;
+ 	(*pos)++;
+ 
+ 	if (*pos < ubi->peb_count)
+@@ -432,11 +427,8 @@ static int eraseblk_count_seq_show(struct seq_file *s, void *iter)
+ 	int err;
+ 
+ 	/* If this is the start, print a header */
+-	if (iter == SEQ_START_TOKEN) {
+-		seq_puts(s,
+-			 "physical_block_number\terase_count\tblock_status\tread_status\n");
+-		return 0;
+-	}
++	if (*block_number == 0)
++		seq_puts(s, "physical_block_number\terase_count\n");
+ 
+ 	err = ubi_io_is_bad(ubi, *block_number);
+ 	if (err)
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.h b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+index 8795e0b1dc3c..8984aa211112 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.h
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.h
+@@ -69,7 +69,7 @@
+  * 16kB.
+  */
+ #if PAGE_SIZE > SZ_16K
+-#define ENA_PAGE_SIZE SZ_16K
++#define ENA_PAGE_SIZE (_AC(SZ_16K, UL))
+ #else
+ #define ENA_PAGE_SIZE PAGE_SIZE
+ #endif
+diff --git a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+index 78b6f3248756..e0625c67eed3 100644
+--- a/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
++++ b/drivers/net/ethernet/aquantia/atlantic/aq_pci_func.c
+@@ -56,7 +56,7 @@ static const struct aq_board_revision_s hw_atl_boards[] = {
+ 	{ AQ_DEVICE_ID_D108,	AQ_HWREV_2,	&hw_atl_ops_b0, &hw_atl_b0_caps_aqc108, },
+ 	{ AQ_DEVICE_ID_D109,	AQ_HWREV_2,	&hw_atl_ops_b0, &hw_atl_b0_caps_aqc109, },
+ 
+-	{ AQ_DEVICE_ID_AQC100,	AQ_HWREV_ANY,	&hw_atl_ops_b1, &hw_atl_b0_caps_aqc107, },
++	{ AQ_DEVICE_ID_AQC100,	AQ_HWREV_ANY,	&hw_atl_ops_b1, &hw_atl_b0_caps_aqc100, },
+ 	{ AQ_DEVICE_ID_AQC107,	AQ_HWREV_ANY,	&hw_atl_ops_b1, &hw_atl_b0_caps_aqc107, },
+ 	{ AQ_DEVICE_ID_AQC108,	AQ_HWREV_ANY,	&hw_atl_ops_b1, &hw_atl_b0_caps_aqc108, },
+ 	{ AQ_DEVICE_ID_AQC109,	AQ_HWREV_ANY,	&hw_atl_ops_b1, &hw_atl_b0_caps_aqc109, },
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 4bd33245bad6..3de549c6c693 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -2189,7 +2189,8 @@ static void __ibmvnic_reset(struct work_struct *work)
+ 				rc = do_hard_reset(adapter, rwi, reset_state);
+ 				rtnl_unlock();
+ 			}
+-		} else {
++		} else if (!(rwi->reset_reason == VNIC_RESET_FATAL &&
++				adapter->from_passive_init)) {
+ 			rc = do_reset(adapter, rwi, reset_state);
+ 		}
+ 		kfree(rwi);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index 7da18c9afa01..d564459290ce 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -3988,7 +3988,7 @@ static int stmmac_set_features(struct net_device *netdev,
+ /**
+  *  stmmac_interrupt - main ISR
+  *  @irq: interrupt number.
+- *  @dev_id: to pass the net device pointer.
++ *  @dev_id: to pass the net device pointer (must be valid).
+  *  Description: this is the main driver interrupt service routine.
+  *  It can call:
+  *  o DMA service routine (to manage incoming frame reception and transmission
+@@ -4012,11 +4012,6 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
+ 	if (priv->irq_wake)
+ 		pm_wakeup_event(priv->device, 0);
+ 
+-	if (unlikely(!dev)) {
+-		netdev_err(priv->dev, "%s: invalid dev pointer\n", __func__);
+-		return IRQ_NONE;
+-	}
+-
+ 	/* Check if adapter is up */
+ 	if (test_bit(STMMAC_DOWN, &priv->state))
+ 		return IRQ_HANDLED;
+diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
+index 672cd2caf2fb..21640a035d7d 100644
+--- a/drivers/net/gtp.c
++++ b/drivers/net/gtp.c
+@@ -1169,11 +1169,11 @@ out_unlock:
+ static struct genl_family gtp_genl_family;
+ 
+ static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq,
+-			      u32 type, struct pdp_ctx *pctx)
++			      int flags, u32 type, struct pdp_ctx *pctx)
+ {
+ 	void *genlh;
+ 
+-	genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, 0,
++	genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, flags,
+ 			    type);
+ 	if (genlh == NULL)
+ 		goto nlmsg_failure;
+@@ -1227,8 +1227,8 @@ static int gtp_genl_get_pdp(struct sk_buff *skb, struct genl_info *info)
+ 		goto err_unlock;
+ 	}
+ 
+-	err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid,
+-				 info->snd_seq, info->nlhdr->nlmsg_type, pctx);
++	err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid, info->snd_seq,
++				 0, info->nlhdr->nlmsg_type, pctx);
+ 	if (err < 0)
+ 		goto err_unlock_free;
+ 
+@@ -1271,6 +1271,7 @@ static int gtp_genl_dump_pdp(struct sk_buff *skb,
+ 				    gtp_genl_fill_info(skb,
+ 					    NETLINK_CB(cb->skb).portid,
+ 					    cb->nlh->nlmsg_seq,
++					    NLM_F_MULTI,
+ 					    cb->nlh->nlmsg_type, pctx)) {
+ 					cb->args[0] = i;
+ 					cb->args[1] = j;
+diff --git a/drivers/pinctrl/qcom/pinctrl-msm.c b/drivers/pinctrl/qcom/pinctrl-msm.c
+index 9f1c9951949e..14a8f8fa0ea3 100644
+--- a/drivers/pinctrl/qcom/pinctrl-msm.c
++++ b/drivers/pinctrl/qcom/pinctrl-msm.c
+@@ -1010,6 +1010,29 @@ static void msm_gpio_irq_relres(struct irq_data *d)
+ 	module_put(gc->owner);
+ }
+ 
++static int msm_gpio_irq_set_affinity(struct irq_data *d,
++				const struct cpumask *dest, bool force)
++{
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
++
++	if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
++		return irq_chip_set_affinity_parent(d, dest, force);
++
++	return 0;
++}
++
++static int msm_gpio_irq_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
++{
++	struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
++	struct msm_pinctrl *pctrl = gpiochip_get_data(gc);
++
++	if (d->parent_data && test_bit(d->hwirq, pctrl->skip_wake_irqs))
++		return irq_chip_set_vcpu_affinity_parent(d, vcpu_info);
++
++	return 0;
++}
++
+ static void msm_gpio_irq_handler(struct irq_desc *desc)
+ {
+ 	struct gpio_chip *gc = irq_desc_get_handler_data(desc);
+@@ -1108,6 +1131,8 @@ static int msm_gpio_init(struct msm_pinctrl *pctrl)
+ 	pctrl->irq_chip.irq_set_wake = msm_gpio_irq_set_wake;
+ 	pctrl->irq_chip.irq_request_resources = msm_gpio_irq_reqres;
+ 	pctrl->irq_chip.irq_release_resources = msm_gpio_irq_relres;
++	pctrl->irq_chip.irq_set_affinity = msm_gpio_irq_set_affinity;
++	pctrl->irq_chip.irq_set_vcpu_affinity = msm_gpio_irq_set_vcpu_affinity;
+ 
+ 	np = of_parse_phandle(pctrl->dev->of_node, "wakeup-parent", 0);
+ 	if (np) {
+diff --git a/drivers/platform/x86/asus-nb-wmi.c b/drivers/platform/x86/asus-nb-wmi.c
+index 6f12747a359a..c4404d9c1de4 100644
+--- a/drivers/platform/x86/asus-nb-wmi.c
++++ b/drivers/platform/x86/asus-nb-wmi.c
+@@ -515,9 +515,33 @@ static struct asus_wmi_driver asus_nb_wmi_driver = {
+ 	.detect_quirks = asus_nb_wmi_quirks,
+ };
+ 
++static const struct dmi_system_id asus_nb_wmi_blacklist[] __initconst = {
++	{
++		/*
++		 * asus-nb-wmi adds no functionality. The T100TA has a detachable
++		 * USB kbd, so no hotkeys and it has no WMI rfkill; and loading
++		 * asus-nb-wmi causes the camera LED to turn on and _stay_ on.
++		 */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T100TA"),
++		},
++	},
++	{
++		/* The Asus T200TA has the same issue as the T100TA */
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "ASUSTeK COMPUTER INC."),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "T200TA"),
++		},
++	},
++	{} /* Terminating entry */
++};
+ 
+ static int __init asus_nb_wmi_init(void)
+ {
++	if (dmi_check_system(asus_nb_wmi_blacklist))
++		return -ENODEV;
++
+ 	return asus_wmi_register_driver(&asus_nb_wmi_driver);
+ }
+ 
+diff --git a/drivers/rapidio/devices/rio_mport_cdev.c b/drivers/rapidio/devices/rio_mport_cdev.c
+index 8155f59ece38..10af330153b5 100644
+--- a/drivers/rapidio/devices/rio_mport_cdev.c
++++ b/drivers/rapidio/devices/rio_mport_cdev.c
+@@ -877,6 +877,11 @@ rio_dma_transfer(struct file *filp, u32 transfer_mode,
+ 				rmcd_error("pinned %ld out of %ld pages",
+ 					   pinned, nr_pages);
+ 			ret = -EFAULT;
++			/*
++			 * Set nr_pages to mean "how many pages to unpin" in
++			 * the error handler:
++			 */
++			nr_pages = pinned;
+ 			goto err_pg;
+ 		}
+ 
+diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
+index 7f66a7783209..59f0f1030c54 100644
+--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
++++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
+@@ -2320,16 +2320,12 @@ static int ibmvscsi_probe(struct vio_dev *vdev, const struct vio_device_id *id)
+ static int ibmvscsi_remove(struct vio_dev *vdev)
+ {
+ 	struct ibmvscsi_host_data *hostdata = dev_get_drvdata(&vdev->dev);
+-	unsigned long flags;
+ 
+ 	srp_remove_host(hostdata->host);
+ 	scsi_remove_host(hostdata->host);
+ 
+ 	purge_requests(hostdata, DID_ERROR);
+-
+-	spin_lock_irqsave(hostdata->host->host_lock, flags);
+ 	release_event_pool(&hostdata->pool, hostdata);
+-	spin_unlock_irqrestore(hostdata->host->host_lock, flags);
+ 
+ 	ibmvscsi_release_crq_queue(&hostdata->queue, hostdata,
+ 					max_events);
+diff --git a/drivers/scsi/qla2xxx/qla_attr.c b/drivers/scsi/qla2xxx/qla_attr.c
+index d7e7043f9eab..e3c45edd0e18 100644
+--- a/drivers/scsi/qla2xxx/qla_attr.c
++++ b/drivers/scsi/qla2xxx/qla_attr.c
+@@ -1777,9 +1777,6 @@ qla2x00_port_speed_show(struct device *dev, struct device_attribute *attr,
+ 		return -EINVAL;
+ 	}
+ 
+-	ql_log(ql_log_info, vha, 0x70d6,
+-	    "port speed:%d\n", ha->link_data_rate);
+-
+ 	return scnprintf(buf, PAGE_SIZE, "%s\n", spd[ha->link_data_rate]);
+ }
+ 
+@@ -2928,11 +2925,11 @@ qla24xx_vport_delete(struct fc_vport *fc_vport)
+ 	    test_bit(FCPORT_UPDATE_NEEDED, &vha->dpc_flags))
+ 		msleep(1000);
+ 
+-	qla_nvme_delete(vha);
+ 
+ 	qla24xx_disable_vp(vha);
+ 	qla2x00_wait_for_sess_deletion(vha);
+ 
++	qla_nvme_delete(vha);
+ 	vha->flags.delete_progress = 1;
+ 
+ 	qlt_remove_target(ha, vha);
+diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c
+index 9e09964f5c0e..7b341e41bb85 100644
+--- a/drivers/scsi/qla2xxx/qla_mbx.c
++++ b/drivers/scsi/qla2xxx/qla_mbx.c
+@@ -3117,7 +3117,7 @@ qla24xx_abort_command(srb_t *sp)
+ 	ql_dbg(ql_dbg_mbx + ql_dbg_verbose, vha, 0x108c,
+ 	    "Entered %s.\n", __func__);
+ 
+-	if (vha->flags.qpairs_available && sp->qpair)
++	if (sp->qpair)
+ 		req = sp->qpair->req;
+ 	else
+ 		return QLA_FUNCTION_FAILED;
+diff --git a/drivers/staging/greybus/uart.c b/drivers/staging/greybus/uart.c
+index 55c51143bb09..4ffb334cd5cd 100644
+--- a/drivers/staging/greybus/uart.c
++++ b/drivers/staging/greybus/uart.c
+@@ -537,9 +537,9 @@ static void gb_tty_set_termios(struct tty_struct *tty,
+ 	}
+ 
+ 	if (C_CRTSCTS(tty) && C_BAUD(tty) != B0)
+-		newline.flow_control |= GB_SERIAL_AUTO_RTSCTS_EN;
++		newline.flow_control = GB_SERIAL_AUTO_RTSCTS_EN;
+ 	else
+-		newline.flow_control &= ~GB_SERIAL_AUTO_RTSCTS_EN;
++		newline.flow_control = 0;
+ 
+ 	if (memcmp(&gb_tty->line_coding, &newline, sizeof(newline))) {
+ 		memcpy(&gb_tty->line_coding, &newline, sizeof(newline));
+diff --git a/drivers/staging/iio/resolver/ad2s1210.c b/drivers/staging/iio/resolver/ad2s1210.c
+index 4b25a3a314ed..ed404355ea4c 100644
+--- a/drivers/staging/iio/resolver/ad2s1210.c
++++ b/drivers/staging/iio/resolver/ad2s1210.c
+@@ -130,17 +130,24 @@ static int ad2s1210_config_write(struct ad2s1210_state *st, u8 data)
+ static int ad2s1210_config_read(struct ad2s1210_state *st,
+ 				unsigned char address)
+ {
+-	struct spi_transfer xfer = {
+-		.len = 2,
+-		.rx_buf = st->rx,
+-		.tx_buf = st->tx,
++	struct spi_transfer xfers[] = {
++		{
++			.len = 1,
++			.rx_buf = &st->rx[0],
++			.tx_buf = &st->tx[0],
++			.cs_change = 1,
++		}, {
++			.len = 1,
++			.rx_buf = &st->rx[1],
++			.tx_buf = &st->tx[1],
++		},
+ 	};
+ 	int ret = 0;
+ 
+ 	ad2s1210_set_mode(MOD_CONFIG, st);
+ 	st->tx[0] = address | AD2S1210_MSB_IS_HIGH;
+ 	st->tx[1] = AD2S1210_REG_FAULT;
+-	ret = spi_sync_transfer(st->sdev, &xfer, 1);
++	ret = spi_sync_transfer(st->sdev, xfers, 2);
+ 	if (ret < 0)
+ 		return ret;
+ 
+diff --git a/drivers/staging/kpc2000/kpc2000/core.c b/drivers/staging/kpc2000/kpc2000/core.c
+index 7b00d7069e21..358d7b2f4ad1 100644
+--- a/drivers/staging/kpc2000/kpc2000/core.c
++++ b/drivers/staging/kpc2000/kpc2000/core.c
+@@ -298,7 +298,6 @@ static int kp2000_pcie_probe(struct pci_dev *pdev,
+ {
+ 	int err = 0;
+ 	struct kp2000_device *pcard;
+-	int rv;
+ 	unsigned long reg_bar_phys_addr;
+ 	unsigned long reg_bar_phys_len;
+ 	unsigned long dma_bar_phys_addr;
+@@ -445,11 +444,11 @@ static int kp2000_pcie_probe(struct pci_dev *pdev,
+ 	if (err < 0)
+ 		goto err_release_dma;
+ 
+-	rv = request_irq(pcard->pdev->irq, kp2000_irq_handler, IRQF_SHARED,
+-			 pcard->name, pcard);
+-	if (rv) {
++	err = request_irq(pcard->pdev->irq, kp2000_irq_handler, IRQF_SHARED,
++			  pcard->name, pcard);
++	if (err) {
+ 		dev_err(&pcard->pdev->dev,
+-			"%s: failed to request_irq: %d\n", __func__, rv);
++			"%s: failed to request_irq: %d\n", __func__, err);
+ 		goto err_disable_msi;
+ 	}
+ 
+diff --git a/drivers/staging/wfx/scan.c b/drivers/staging/wfx/scan.c
+index 6e1e50048651..9aa14331affd 100644
+--- a/drivers/staging/wfx/scan.c
++++ b/drivers/staging/wfx/scan.c
+@@ -57,8 +57,10 @@ static int send_scan_req(struct wfx_vif *wvif,
+ 	wvif->scan_abort = false;
+ 	reinit_completion(&wvif->scan_complete);
+ 	timeout = hif_scan(wvif, req, start_idx, i - start_idx);
+-	if (timeout < 0)
++	if (timeout < 0) {
++		wfx_tx_unlock(wvif->wdev);
+ 		return timeout;
++	}
+ 	ret = wait_for_completion_timeout(&wvif->scan_complete, timeout);
+ 	if (req->channels[start_idx]->max_power != wvif->vif->bss_conf.txpower)
+ 		hif_set_output_power(wvif, wvif->vif->bss_conf.txpower);
+diff --git a/drivers/target/target_core_transport.c b/drivers/target/target_core_transport.c
+index 0ae9e60fc4d5..61486e5abee4 100644
+--- a/drivers/target/target_core_transport.c
++++ b/drivers/target/target_core_transport.c
+@@ -3349,6 +3349,7 @@ static void target_tmr_work(struct work_struct *work)
+ 
+ 	cmd->se_tfo->queue_tm_rsp(cmd);
+ 
++	transport_lun_remove_cmd(cmd);
+ 	transport_cmd_check_stop_to_fabric(cmd);
+ 	return;
+ 
+diff --git a/drivers/tty/serial/sifive.c b/drivers/tty/serial/sifive.c
+index d5f81b98e4d7..38133eba83a8 100644
+--- a/drivers/tty/serial/sifive.c
++++ b/drivers/tty/serial/sifive.c
+@@ -840,6 +840,7 @@ console_initcall(sifive_console_init);
+ 
+ static void __ssp_add_console_port(struct sifive_serial_port *ssp)
+ {
++	spin_lock_init(&ssp->port.lock);
+ 	sifive_serial_console_ports[ssp->port.line] = ssp;
+ }
+ 
+diff --git a/drivers/usb/core/message.c b/drivers/usb/core/message.c
+index 02eaac7e1e34..a1ac2f0723b0 100644
+--- a/drivers/usb/core/message.c
++++ b/drivers/usb/core/message.c
+@@ -1143,11 +1143,11 @@ void usb_disable_endpoint(struct usb_device *dev, unsigned int epaddr,
+ 
+ 	if (usb_endpoint_out(epaddr)) {
+ 		ep = dev->ep_out[epnum];
+-		if (reset_hardware)
++		if (reset_hardware && epnum != 0)
+ 			dev->ep_out[epnum] = NULL;
+ 	} else {
+ 		ep = dev->ep_in[epnum];
+-		if (reset_hardware)
++		if (reset_hardware && epnum != 0)
+ 			dev->ep_in[epnum] = NULL;
+ 	}
+ 	if (ep) {
+diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
+index bb3f63386b47..53294c2f8cff 100644
+--- a/drivers/vhost/vsock.c
++++ b/drivers/vhost/vsock.c
+@@ -181,14 +181,14 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
+ 			break;
+ 		}
+ 
+-		vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
+-		added = true;
+-
+-		/* Deliver to monitoring devices all correctly transmitted
+-		 * packets.
++		/* Deliver to monitoring devices all packets that we
++		 * will transmit.
+ 		 */
+ 		virtio_transport_deliver_tap_pkt(pkt);
+ 
++		vhost_add_used(vq, head, sizeof(pkt->hdr) + payload_len);
++		added = true;
++
+ 		pkt->off += payload_len;
+ 		total_len += payload_len;
+ 
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 44375a22307b..341458fd95ca 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -14,7 +14,6 @@
+ #include <linux/slab.h>
+ #include <linux/module.h>
+ #include <linux/balloon_compaction.h>
+-#include <linux/oom.h>
+ #include <linux/wait.h>
+ #include <linux/mm.h>
+ #include <linux/mount.h>
+@@ -28,9 +27,7 @@
+  */
+ #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
+ #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
+-/* Maximum number of (4k) pages to deflate on OOM notifications. */
+-#define VIRTIO_BALLOON_OOM_NR_PAGES 256
+-#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
++#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
+ 
+ #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
+ 					     __GFP_NOMEMALLOC)
+@@ -115,11 +112,8 @@ struct virtio_balloon {
+ 	/* Memory statistics */
+ 	struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
+ 
+-	/* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
++	/* To register a shrinker to shrink memory upon memory pressure */
+ 	struct shrinker shrinker;
+-
+-	/* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
+-	struct notifier_block oom_nb;
+ };
+ 
+ static struct virtio_device_id id_table[] = {
+@@ -794,13 +788,50 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
+ 	return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ }
+ 
++static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
++                                          unsigned long pages_to_free)
++{
++	return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
++		VIRTIO_BALLOON_PAGES_PER_PAGE;
++}
++
++static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
++					  unsigned long pages_to_free)
++{
++	unsigned long pages_freed = 0;
++
++	/*
++	 * One invocation of leak_balloon can deflate at most
++	 * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
++	 * multiple times to deflate pages till reaching pages_to_free.
++	 */
++	while (vb->num_pages && pages_freed < pages_to_free)
++		pages_freed += leak_balloon_pages(vb,
++						  pages_to_free - pages_freed);
++
++	update_balloon_size(vb);
++
++	return pages_freed;
++}
++
+ static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
+ 						  struct shrink_control *sc)
+ {
++	unsigned long pages_to_free, pages_freed = 0;
+ 	struct virtio_balloon *vb = container_of(shrinker,
+ 					struct virtio_balloon, shrinker);
+ 
+-	return shrink_free_pages(vb, sc->nr_to_scan);
++	pages_to_free = sc->nr_to_scan;
++
++	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
++		pages_freed = shrink_free_pages(vb, pages_to_free);
++
++	if (pages_freed >= pages_to_free)
++		return pages_freed;
++
++	pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
++
++	return pages_freed;
+ }
+ 
+ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+@@ -808,22 +839,26 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+ {
+ 	struct virtio_balloon *vb = container_of(shrinker,
+ 					struct virtio_balloon, shrinker);
++	unsigned long count;
++
++	count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
++	count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ 
+-	return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
++	return count;
+ }
+ 
+-static int virtio_balloon_oom_notify(struct notifier_block *nb,
+-				     unsigned long dummy, void *parm)
++static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
+ {
+-	struct virtio_balloon *vb = container_of(nb,
+-						 struct virtio_balloon, oom_nb);
+-	unsigned long *freed = parm;
++	unregister_shrinker(&vb->shrinker);
++}
+ 
+-	*freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
+-		  VIRTIO_BALLOON_PAGES_PER_PAGE;
+-	update_balloon_size(vb);
++static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
++{
++	vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
++	vb->shrinker.count_objects = virtio_balloon_shrinker_count;
++	vb->shrinker.seeks = DEFAULT_SEEKS;
+ 
+-	return NOTIFY_OK;
++	return register_shrinker(&vb->shrinker);
+ }
+ 
+ static int virtballoon_probe(struct virtio_device *vdev)
+@@ -900,35 +935,22 @@ static int virtballoon_probe(struct virtio_device *vdev)
+ 			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+ 				      poison_val, &poison_val);
+ 		}
+-
+-		/*
+-		 * We're allowed to reuse any free pages, even if they are
+-		 * still to be processed by the host.
+-		 */
+-		vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
+-		vb->shrinker.count_objects = virtio_balloon_shrinker_count;
+-		vb->shrinker.seeks = DEFAULT_SEEKS;
+-		err = register_shrinker(&vb->shrinker);
++	}
++	/*
++	 * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
++	 * shrinker needs to be registered to relieve memory pressure.
++	 */
++	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
++		err = virtio_balloon_register_shrinker(vb);
+ 		if (err)
+ 			goto out_del_balloon_wq;
+ 	}
+-	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
+-		vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
+-		vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
+-		err = register_oom_notifier(&vb->oom_nb);
+-		if (err < 0)
+-			goto out_unregister_shrinker;
+-	}
+-
+ 	virtio_device_ready(vdev);
+ 
+ 	if (towards_target(vb))
+ 		virtballoon_changed(vdev);
+ 	return 0;
+ 
+-out_unregister_shrinker:
+-	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+-		unregister_shrinker(&vb->shrinker);
+ out_del_balloon_wq:
+ 	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+ 		destroy_workqueue(vb->balloon_wq);
+@@ -967,11 +989,8 @@ static void virtballoon_remove(struct virtio_device *vdev)
+ {
+ 	struct virtio_balloon *vb = vdev->priv;
+ 
+-	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
+-		unregister_oom_notifier(&vb->oom_nb);
+-	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+-		unregister_shrinker(&vb->shrinker);
+-
++	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
++		virtio_balloon_unregister_shrinker(vb);
+ 	spin_lock_irq(&vb->stop_update_lock);
+ 	vb->stop_update = true;
+ 	spin_unlock_irq(&vb->stop_update_lock);
+diff --git a/fs/afs/fs_probe.c b/fs/afs/fs_probe.c
+index e1b9ed679045..02e976ca5732 100644
+--- a/fs/afs/fs_probe.c
++++ b/fs/afs/fs_probe.c
+@@ -32,9 +32,8 @@ void afs_fileserver_probe_result(struct afs_call *call)
+ 	struct afs_server *server = call->server;
+ 	unsigned int server_index = call->server_index;
+ 	unsigned int index = call->addr_ix;
+-	unsigned int rtt = UINT_MAX;
++	unsigned int rtt_us;
+ 	bool have_result = false;
+-	u64 _rtt;
+ 	int ret = call->error;
+ 
+ 	_enter("%pU,%u", &server->uuid, index);
+@@ -93,15 +92,9 @@ responded:
+ 		}
+ 	}
+ 
+-	/* Get the RTT and scale it to fit into a 32-bit value that represents
+-	 * over a minute of time so that we can access it with one instruction
+-	 * on a 32-bit system.
+-	 */
+-	_rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall);
+-	_rtt /= 64;
+-	rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt;
+-	if (rtt < server->probe.rtt) {
+-		server->probe.rtt = rtt;
++	rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
++	if (rtt_us < server->probe.rtt) {
++		server->probe.rtt = rtt_us;
+ 		alist->preferred = index;
+ 		have_result = true;
+ 	}
+@@ -113,8 +106,7 @@ out:
+ 	spin_unlock(&server->probe_lock);
+ 
+ 	_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
+-	       server_index, index, &alist->addrs[index].transport,
+-	       (unsigned int)rtt, ret);
++	       server_index, index, &alist->addrs[index].transport, rtt_us, ret);
+ 
+ 	have_result |= afs_fs_probe_done(server);
+ 	if (have_result) {
+diff --git a/fs/afs/fsclient.c b/fs/afs/fsclient.c
+index 68fc46634346..d2b3798c1932 100644
+--- a/fs/afs/fsclient.c
++++ b/fs/afs/fsclient.c
+@@ -385,8 +385,6 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
+ 		ASSERTCMP(req->offset, <=, PAGE_SIZE);
+ 		if (req->offset == PAGE_SIZE) {
+ 			req->offset = 0;
+-			if (req->page_done)
+-				req->page_done(req);
+ 			req->index++;
+ 			if (req->remain > 0)
+ 				goto begin_page;
+@@ -440,11 +438,13 @@ static int afs_deliver_fs_fetch_data(struct afs_call *call)
+ 		if (req->offset < PAGE_SIZE)
+ 			zero_user_segment(req->pages[req->index],
+ 					  req->offset, PAGE_SIZE);
+-		if (req->page_done)
+-			req->page_done(req);
+ 		req->offset = 0;
+ 	}
+ 
++	if (req->page_done)
++		for (req->index = 0; req->index < req->nr_pages; req->index++)
++			req->page_done(req);
++
+ 	_leave(" = 0 [done]");
+ 	return 0;
+ }
+diff --git a/fs/afs/vl_probe.c b/fs/afs/vl_probe.c
+index 858498cc1b05..e3aa013c2177 100644
+--- a/fs/afs/vl_probe.c
++++ b/fs/afs/vl_probe.c
+@@ -31,10 +31,9 @@ void afs_vlserver_probe_result(struct afs_call *call)
+ 	struct afs_addr_list *alist = call->alist;
+ 	struct afs_vlserver *server = call->vlserver;
+ 	unsigned int server_index = call->server_index;
++	unsigned int rtt_us = 0;
+ 	unsigned int index = call->addr_ix;
+-	unsigned int rtt = UINT_MAX;
+ 	bool have_result = false;
+-	u64 _rtt;
+ 	int ret = call->error;
+ 
+ 	_enter("%s,%u,%u,%d,%d", server->name, server_index, index, ret, call->abort_code);
+@@ -93,15 +92,9 @@ responded:
+ 		}
+ 	}
+ 
+-	/* Get the RTT and scale it to fit into a 32-bit value that represents
+-	 * over a minute of time so that we can access it with one instruction
+-	 * on a 32-bit system.
+-	 */
+-	_rtt = rxrpc_kernel_get_rtt(call->net->socket, call->rxcall);
+-	_rtt /= 64;
+-	rtt = (_rtt > UINT_MAX) ? UINT_MAX : _rtt;
+-	if (rtt < server->probe.rtt) {
+-		server->probe.rtt = rtt;
++	rtt_us = rxrpc_kernel_get_srtt(call->net->socket, call->rxcall);
++	if (rtt_us < server->probe.rtt) {
++		server->probe.rtt = rtt_us;
+ 		alist->preferred = index;
+ 		have_result = true;
+ 	}
+@@ -113,8 +106,7 @@ out:
+ 	spin_unlock(&server->probe_lock);
+ 
+ 	_debug("probe [%u][%u] %pISpc rtt=%u ret=%d",
+-	       server_index, index, &alist->addrs[index].transport,
+-	       (unsigned int)rtt, ret);
++	       server_index, index, &alist->addrs[index].transport, rtt_us, ret);
+ 
+ 	have_result |= afs_vl_probe_done(server);
+ 	if (have_result) {
+diff --git a/fs/afs/yfsclient.c b/fs/afs/yfsclient.c
+index b5b45c57e1b1..fe413e7a5cf4 100644
+--- a/fs/afs/yfsclient.c
++++ b/fs/afs/yfsclient.c
+@@ -497,8 +497,6 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
+ 		ASSERTCMP(req->offset, <=, PAGE_SIZE);
+ 		if (req->offset == PAGE_SIZE) {
+ 			req->offset = 0;
+-			if (req->page_done)
+-				req->page_done(req);
+ 			req->index++;
+ 			if (req->remain > 0)
+ 				goto begin_page;
+@@ -556,11 +554,13 @@ static int yfs_deliver_fs_fetch_data64(struct afs_call *call)
+ 		if (req->offset < PAGE_SIZE)
+ 			zero_user_segment(req->pages[req->index],
+ 					  req->offset, PAGE_SIZE);
+-		if (req->page_done)
+-			req->page_done(req);
+ 		req->offset = 0;
+ 	}
+ 
++	if (req->page_done)
++		for (req->index = 0; req->index < req->nr_pages; req->index++)
++			req->page_done(req);
++
+ 	_leave(" = 0 [done]");
+ 	return 0;
+ }
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index d050acc1fd5d..f50204380a65 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3707,6 +3707,7 @@ retry:
+ 		WARN_ON(1);
+ 		tsession = NULL;
+ 		target = -1;
++		mutex_lock(&session->s_mutex);
+ 	}
+ 	goto retry;
+ 
+diff --git a/fs/configfs/dir.c b/fs/configfs/dir.c
+index cf7b7e1d5bd7..cb733652ecca 100644
+--- a/fs/configfs/dir.c
++++ b/fs/configfs/dir.c
+@@ -1519,6 +1519,7 @@ static int configfs_rmdir(struct inode *dir, struct dentry *dentry)
+ 		spin_lock(&configfs_dirent_lock);
+ 		configfs_detach_rollback(dentry);
+ 		spin_unlock(&configfs_dirent_lock);
++		config_item_put(parent_item);
+ 		return -EINTR;
+ 	}
+ 	frag->frag_dead = true;
+diff --git a/fs/file.c b/fs/file.c
+index c8a4e4c86e55..abb8b7081d7a 100644
+--- a/fs/file.c
++++ b/fs/file.c
+@@ -70,7 +70,7 @@ static void copy_fd_bitmaps(struct fdtable *nfdt, struct fdtable *ofdt,
+  */
+ static void copy_fdtable(struct fdtable *nfdt, struct fdtable *ofdt)
+ {
+-	unsigned int cpy, set;
++	size_t cpy, set;
+ 
+ 	BUG_ON(nfdt->max_fds < ofdt->max_fds);
+ 
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index 19ebc6cd0f2b..d0eceaff3cea 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -645,9 +645,6 @@ __acquires(&gl->gl_lockref.lock)
+ 			goto out_unlock;
+ 		if (nonblock)
+ 			goto out_sched;
+-		smp_mb();
+-		if (atomic_read(&gl->gl_revokes) != 0)
+-			goto out_sched;
+ 		set_bit(GLF_DEMOTE_IN_PROGRESS, &gl->gl_flags);
+ 		GLOCK_BUG_ON(gl, gl->gl_demote_state == LM_ST_EXCLUSIVE);
+ 		gl->gl_target = gl->gl_demote_state;
+diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c
+index 6f54d70cef27..e605017031ee 100644
+--- a/fs/overlayfs/export.c
++++ b/fs/overlayfs/export.c
+@@ -777,6 +777,9 @@ static struct ovl_fh *ovl_fid_to_fh(struct fid *fid, int buflen, int fh_type)
+ 	if (fh_type != OVL_FILEID_V0)
+ 		return ERR_PTR(-EINVAL);
+ 
++	if (buflen <= OVL_FH_WIRE_OFFSET)
++		return ERR_PTR(-EINVAL);
++
+ 	fh = kzalloc(buflen, GFP_KERNEL);
+ 	if (!fh)
+ 		return ERR_PTR(-ENOMEM);
+diff --git a/fs/splice.c b/fs/splice.c
+index d671936d0aad..39b11a9a6b98 100644
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -1503,7 +1503,7 @@ static int opipe_prep(struct pipe_inode_info *pipe, unsigned int flags)
+ 	 * Check pipe occupancy without the inode lock first. This function
+ 	 * is speculative anyways, so missing one is ok.
+ 	 */
+-	if (pipe_full(pipe->head, pipe->tail, pipe->max_usage))
++	if (!pipe_full(pipe->head, pipe->tail, pipe->max_usage))
+ 		return 0;
+ 
+ 	ret = 0;
+diff --git a/fs/ubifs/auth.c b/fs/ubifs/auth.c
+index 8cdbd53d780c..f985a3fbbb36 100644
+--- a/fs/ubifs/auth.c
++++ b/fs/ubifs/auth.c
+@@ -79,13 +79,9 @@ int ubifs_prepare_auth_node(struct ubifs_info *c, void *node,
+ 			     struct shash_desc *inhash)
+ {
+ 	struct ubifs_auth_node *auth = node;
+-	u8 *hash;
++	u8 hash[UBIFS_HASH_ARR_SZ];
+ 	int err;
+ 
+-	hash = kmalloc(crypto_shash_descsize(c->hash_tfm), GFP_NOFS);
+-	if (!hash)
+-		return -ENOMEM;
+-
+ 	{
+ 		SHASH_DESC_ON_STACK(hash_desc, c->hash_tfm);
+ 
+@@ -94,21 +90,16 @@ int ubifs_prepare_auth_node(struct ubifs_info *c, void *node,
+ 
+ 		err = crypto_shash_final(hash_desc, hash);
+ 		if (err)
+-			goto out;
++			return err;
+ 	}
+ 
+ 	err = ubifs_hash_calc_hmac(c, hash, auth->hmac);
+ 	if (err)
+-		goto out;
++		return err;
+ 
+ 	auth->ch.node_type = UBIFS_AUTH_NODE;
+ 	ubifs_prepare_node(c, auth, ubifs_auth_node_sz(c), 0);
+-
+-	err = 0;
+-out:
+-	kfree(hash);
+-
+-	return err;
++	return 0;
+ }
+ 
+ static struct shash_desc *ubifs_get_desc(const struct ubifs_info *c,
+diff --git a/fs/ubifs/file.c b/fs/ubifs/file.c
+index 743928efffc1..49fe062ce45e 100644
+--- a/fs/ubifs/file.c
++++ b/fs/ubifs/file.c
+@@ -1375,7 +1375,6 @@ int ubifs_update_time(struct inode *inode, struct timespec64 *time,
+ 	struct ubifs_info *c = inode->i_sb->s_fs_info;
+ 	struct ubifs_budget_req req = { .dirtied_ino = 1,
+ 			.dirtied_ino_d = ALIGN(ui->data_len, 8) };
+-	int iflags = I_DIRTY_TIME;
+ 	int err, release;
+ 
+ 	if (!IS_ENABLED(CONFIG_UBIFS_ATIME_SUPPORT))
+@@ -1393,11 +1392,8 @@ int ubifs_update_time(struct inode *inode, struct timespec64 *time,
+ 	if (flags & S_MTIME)
+ 		inode->i_mtime = *time;
+ 
+-	if (!(inode->i_sb->s_flags & SB_LAZYTIME))
+-		iflags |= I_DIRTY_SYNC;
+-
+ 	release = ui->dirty;
+-	__mark_inode_dirty(inode, iflags);
++	__mark_inode_dirty(inode, I_DIRTY_SYNC);
+ 	mutex_unlock(&ui->ui_mutex);
+ 	if (release)
+ 		ubifs_release_budget(c, &req);
+diff --git a/fs/ubifs/replay.c b/fs/ubifs/replay.c
+index b28ac4dfb407..01fcf7975047 100644
+--- a/fs/ubifs/replay.c
++++ b/fs/ubifs/replay.c
+@@ -601,18 +601,12 @@ static int authenticate_sleb(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
+ 	struct ubifs_scan_node *snod;
+ 	int n_nodes = 0;
+ 	int err;
+-	u8 *hash, *hmac;
++	u8 hash[UBIFS_HASH_ARR_SZ];
++	u8 hmac[UBIFS_HMAC_ARR_SZ];
+ 
+ 	if (!ubifs_authenticated(c))
+ 		return sleb->nodes_cnt;
+ 
+-	hash = kmalloc(crypto_shash_descsize(c->hash_tfm), GFP_NOFS);
+-	hmac = kmalloc(c->hmac_desc_len, GFP_NOFS);
+-	if (!hash || !hmac) {
+-		err = -ENOMEM;
+-		goto out;
+-	}
+-
+ 	list_for_each_entry(snod, &sleb->nodes, list) {
+ 
+ 		n_nodes++;
+@@ -662,9 +656,6 @@ static int authenticate_sleb(struct ubifs_info *c, struct ubifs_scan_leb *sleb,
+ 		err = 0;
+ 	}
+ out:
+-	kfree(hash);
+-	kfree(hmac);
+-
+ 	return err ? err : n_nodes - n_not_auth;
+ }
+ 
+diff --git a/include/linux/platform_device.h b/include/linux/platform_device.h
+index 81900b3cbe37..041bfa412aa0 100644
+--- a/include/linux/platform_device.h
++++ b/include/linux/platform_device.h
+@@ -25,7 +25,6 @@ struct platform_device {
+ 	bool		id_auto;
+ 	struct device	dev;
+ 	u64		platform_dma_mask;
+-	struct device_dma_parameters dma_parms;
+ 	u32		num_resources;
+ 	struct resource	*resource;
+ 
+diff --git a/include/net/af_rxrpc.h b/include/net/af_rxrpc.h
+index 04e97bab6f28..ab988940bf04 100644
+--- a/include/net/af_rxrpc.h
++++ b/include/net/af_rxrpc.h
+@@ -59,7 +59,7 @@ bool rxrpc_kernel_abort_call(struct socket *, struct rxrpc_call *,
+ void rxrpc_kernel_end_call(struct socket *, struct rxrpc_call *);
+ void rxrpc_kernel_get_peer(struct socket *, struct rxrpc_call *,
+ 			   struct sockaddr_rxrpc *);
+-u64 rxrpc_kernel_get_rtt(struct socket *, struct rxrpc_call *);
++u32 rxrpc_kernel_get_srtt(struct socket *, struct rxrpc_call *);
+ int rxrpc_kernel_charge_accept(struct socket *, rxrpc_notify_rx_t,
+ 			       rxrpc_user_attach_call_t, unsigned long, gfp_t,
+ 			       unsigned int);
+diff --git a/include/net/drop_monitor.h b/include/net/drop_monitor.h
+index 2ab668461463..f68bc373544a 100644
+--- a/include/net/drop_monitor.h
++++ b/include/net/drop_monitor.h
+@@ -19,7 +19,7 @@ struct net_dm_hw_metadata {
+ 	struct net_device *input_dev;
+ };
+ 
+-#if IS_ENABLED(CONFIG_NET_DROP_MONITOR)
++#if IS_REACHABLE(CONFIG_NET_DROP_MONITOR)
+ void net_dm_hw_report(struct sk_buff *skb,
+ 		      const struct net_dm_hw_metadata *hw_metadata);
+ #else
+diff --git a/include/trace/events/rxrpc.h b/include/trace/events/rxrpc.h
+index 191fe447f990..ba9efdc848f9 100644
+--- a/include/trace/events/rxrpc.h
++++ b/include/trace/events/rxrpc.h
+@@ -1112,18 +1112,17 @@ TRACE_EVENT(rxrpc_rtt_tx,
+ TRACE_EVENT(rxrpc_rtt_rx,
+ 	    TP_PROTO(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
+ 		     rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
+-		     s64 rtt, u8 nr, s64 avg),
++		     u32 rtt, u32 rto),
+ 
+-	    TP_ARGS(call, why, send_serial, resp_serial, rtt, nr, avg),
++	    TP_ARGS(call, why, send_serial, resp_serial, rtt, rto),
+ 
+ 	    TP_STRUCT__entry(
+ 		    __field(unsigned int,		call		)
+ 		    __field(enum rxrpc_rtt_rx_trace,	why		)
+-		    __field(u8,				nr		)
+ 		    __field(rxrpc_serial_t,		send_serial	)
+ 		    __field(rxrpc_serial_t,		resp_serial	)
+-		    __field(s64,			rtt		)
+-		    __field(u64,			avg		)
++		    __field(u32,			rtt		)
++		    __field(u32,			rto		)
+ 			     ),
+ 
+ 	    TP_fast_assign(
+@@ -1132,18 +1131,16 @@ TRACE_EVENT(rxrpc_rtt_rx,
+ 		    __entry->send_serial = send_serial;
+ 		    __entry->resp_serial = resp_serial;
+ 		    __entry->rtt = rtt;
+-		    __entry->nr = nr;
+-		    __entry->avg = avg;
++		    __entry->rto = rto;
+ 			   ),
+ 
+-	    TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%lld nr=%u avg=%lld",
++	    TP_printk("c=%08x %s sr=%08x rr=%08x rtt=%u rto=%u",
+ 		      __entry->call,
+ 		      __print_symbolic(__entry->why, rxrpc_rtt_rx_traces),
+ 		      __entry->send_serial,
+ 		      __entry->resp_serial,
+ 		      __entry->rtt,
+-		      __entry->nr,
+-		      __entry->avg)
++		      __entry->rto)
+ 	    );
+ 
+ TRACE_EVENT(rxrpc_timer,
+@@ -1544,6 +1541,41 @@ TRACE_EVENT(rxrpc_notify_socket,
+ 		      __entry->serial)
+ 	    );
+ 
++TRACE_EVENT(rxrpc_rx_discard_ack,
++	    TP_PROTO(unsigned int debug_id, rxrpc_serial_t serial,
++		     rxrpc_seq_t first_soft_ack, rxrpc_seq_t call_ackr_first,
++		     rxrpc_seq_t prev_pkt, rxrpc_seq_t call_ackr_prev),
++
++	    TP_ARGS(debug_id, serial, first_soft_ack, call_ackr_first,
++		    prev_pkt, call_ackr_prev),
++
++	    TP_STRUCT__entry(
++		    __field(unsigned int,	debug_id	)
++		    __field(rxrpc_serial_t,	serial		)
++		    __field(rxrpc_seq_t,	first_soft_ack)
++		    __field(rxrpc_seq_t,	call_ackr_first)
++		    __field(rxrpc_seq_t,	prev_pkt)
++		    __field(rxrpc_seq_t,	call_ackr_prev)
++			     ),
++
++	    TP_fast_assign(
++		    __entry->debug_id		= debug_id;
++		    __entry->serial		= serial;
++		    __entry->first_soft_ack	= first_soft_ack;
++		    __entry->call_ackr_first	= call_ackr_first;
++		    __entry->prev_pkt		= prev_pkt;
++		    __entry->call_ackr_prev	= call_ackr_prev;
++			   ),
++
++	    TP_printk("c=%08x r=%08x %08x<%08x %08x<%08x",
++		      __entry->debug_id,
++		      __entry->serial,
++		      __entry->first_soft_ack,
++		      __entry->call_ackr_first,
++		      __entry->prev_pkt,
++		      __entry->call_ackr_prev)
++	    );
++
+ #endif /* _TRACE_RXRPC_H */
+ 
+ /* This part must be outside protection */
+diff --git a/init/Kconfig b/init/Kconfig
+index ef59c5c36cdb..59908e87ece2 100644
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -2223,6 +2223,9 @@ config ASN1
+ 
+ source "kernel/Kconfig.locks"
+ 
++config ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
++	bool
++
+ config ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
+ 	bool
+ 
+diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
+index e04ea4c8f935..c0ab9bfdf28a 100644
+--- a/kernel/bpf/syscall.c
++++ b/kernel/bpf/syscall.c
+@@ -629,9 +629,20 @@ static int bpf_map_mmap(struct file *filp, struct vm_area_struct *vma)
+ 
+ 	mutex_lock(&map->freeze_mutex);
+ 
+-	if ((vma->vm_flags & VM_WRITE) && map->frozen) {
+-		err = -EPERM;
+-		goto out;
++	if (vma->vm_flags & VM_WRITE) {
++		if (map->frozen) {
++			err = -EPERM;
++			goto out;
++		}
++		/* map is meant to be read-only, so do not allow mapping as
++		 * writable, because it's possible to leak a writable page
++		 * reference and allows user-space to still modify it after
++		 * freezing, while verifier will assume contents do not change
++		 */
++		if (map->map_flags & BPF_F_RDONLY_PROG) {
++			err = -EACCES;
++			goto out;
++		}
+ 	}
+ 
+ 	/* set default open/close callbacks */
+diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
+index c1bb5be530e9..775fca737909 100644
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -4113,7 +4113,9 @@ static int do_refine_retval_range(struct bpf_verifier_env *env,
+ 
+ 	if (ret_type != RET_INTEGER ||
+ 	    (func_id != BPF_FUNC_get_stack &&
+-	     func_id != BPF_FUNC_probe_read_str))
++	     func_id != BPF_FUNC_probe_read_str &&
++	     func_id != BPF_FUNC_probe_read_kernel_str &&
++	     func_id != BPF_FUNC_probe_read_user_str))
+ 		return 0;
+ 
+ 	/* Error case where ret is in interval [S32MIN, -1]. */
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index c76a20648b72..603d3d3cbf77 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -5276,32 +5276,38 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 		cfs_rq = cfs_rq_of(se);
+ 		enqueue_entity(cfs_rq, se, flags);
+ 
+-		/*
+-		 * end evaluation on encountering a throttled cfs_rq
+-		 *
+-		 * note: in the case of encountering a throttled cfs_rq we will
+-		 * post the final h_nr_running increment below.
+-		 */
+-		if (cfs_rq_throttled(cfs_rq))
+-			break;
+ 		cfs_rq->h_nr_running++;
+ 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+ 
++		/* end evaluation on encountering a throttled cfs_rq */
++		if (cfs_rq_throttled(cfs_rq))
++			goto enqueue_throttle;
++
+ 		flags = ENQUEUE_WAKEUP;
+ 	}
+ 
+ 	for_each_sched_entity(se) {
+ 		cfs_rq = cfs_rq_of(se);
++
++		update_load_avg(cfs_rq, se, UPDATE_TG);
++		update_cfs_group(se);
++
+ 		cfs_rq->h_nr_running++;
+ 		cfs_rq->idle_h_nr_running += idle_h_nr_running;
+ 
++		/* end evaluation on encountering a throttled cfs_rq */
+ 		if (cfs_rq_throttled(cfs_rq))
+-			break;
++			goto enqueue_throttle;
+ 
+-		update_load_avg(cfs_rq, se, UPDATE_TG);
+-		update_cfs_group(se);
++               /*
++                * One parent has been throttled and cfs_rq removed from the
++                * list. Add it back to not break the leaf list.
++                */
++               if (throttled_hierarchy(cfs_rq))
++                       list_add_leaf_cfs_rq(cfs_rq);
+ 	}
+ 
++enqueue_throttle:
+ 	if (!se) {
+ 		add_nr_running(rq, 1);
+ 		/*
+@@ -5362,17 +5368,13 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 		cfs_rq = cfs_rq_of(se);
+ 		dequeue_entity(cfs_rq, se, flags);
+ 
+-		/*
+-		 * end evaluation on encountering a throttled cfs_rq
+-		 *
+-		 * note: in the case of encountering a throttled cfs_rq we will
+-		 * post the final h_nr_running decrement below.
+-		*/
+-		if (cfs_rq_throttled(cfs_rq))
+-			break;
+ 		cfs_rq->h_nr_running--;
+ 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
+ 
++		/* end evaluation on encountering a throttled cfs_rq */
++		if (cfs_rq_throttled(cfs_rq))
++			goto dequeue_throttle;
++
+ 		/* Don't dequeue parent if it has other entities besides us */
+ 		if (cfs_rq->load.weight) {
+ 			/* Avoid re-evaluating load for this entity: */
+@@ -5390,16 +5392,20 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
+ 
+ 	for_each_sched_entity(se) {
+ 		cfs_rq = cfs_rq_of(se);
++
++		update_load_avg(cfs_rq, se, UPDATE_TG);
++		update_cfs_group(se);
++
+ 		cfs_rq->h_nr_running--;
+ 		cfs_rq->idle_h_nr_running -= idle_h_nr_running;
+ 
++		/* end evaluation on encountering a throttled cfs_rq */
+ 		if (cfs_rq_throttled(cfs_rq))
+-			break;
++			goto dequeue_throttle;
+ 
+-		update_load_avg(cfs_rq, se, UPDATE_TG);
+-		update_cfs_group(se);
+ 	}
+ 
++dequeue_throttle:
+ 	if (!se)
+ 		sub_nr_running(rq, 1);
+ 
+diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
+index b899a2d7e900..158233a2ab6c 100644
+--- a/kernel/trace/bpf_trace.c
++++ b/kernel/trace/bpf_trace.c
+@@ -857,14 +857,16 @@ tracing_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
+ 		return &bpf_probe_read_user_proto;
+ 	case BPF_FUNC_probe_read_kernel:
+ 		return &bpf_probe_read_kernel_proto;
+-	case BPF_FUNC_probe_read:
+-		return &bpf_probe_read_compat_proto;
+ 	case BPF_FUNC_probe_read_user_str:
+ 		return &bpf_probe_read_user_str_proto;
+ 	case BPF_FUNC_probe_read_kernel_str:
+ 		return &bpf_probe_read_kernel_str_proto;
++#ifdef CONFIG_ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
++	case BPF_FUNC_probe_read:
++		return &bpf_probe_read_compat_proto;
+ 	case BPF_FUNC_probe_read_str:
+ 		return &bpf_probe_read_compat_str_proto;
++#endif
+ #ifdef CONFIG_CGROUPS
+ 	case BPF_FUNC_get_current_cgroup_id:
+ 		return &bpf_get_current_cgroup_id_proto;
+diff --git a/lib/test_printf.c b/lib/test_printf.c
+index 2d9f520d2f27..6b1622f4d7c2 100644
+--- a/lib/test_printf.c
++++ b/lib/test_printf.c
+@@ -214,6 +214,7 @@ test_string(void)
+ #define PTR_STR "ffff0123456789ab"
+ #define PTR_VAL_NO_CRNG "(____ptrval____)"
+ #define ZEROS "00000000"	/* hex 32 zero bits */
++#define ONES "ffffffff"		/* hex 32 one bits */
+ 
+ static int __init
+ plain_format(void)
+@@ -245,6 +246,7 @@ plain_format(void)
+ #define PTR_STR "456789ab"
+ #define PTR_VAL_NO_CRNG "(ptrval)"
+ #define ZEROS ""
++#define ONES ""
+ 
+ static int __init
+ plain_format(void)
+@@ -330,14 +332,28 @@ test_hashed(const char *fmt, const void *p)
+ 	test(buf, fmt, p);
+ }
+ 
++/*
++ * NULL pointers aren't hashed.
++ */
+ static void __init
+ null_pointer(void)
+ {
+-	test_hashed("%p", NULL);
++	test(ZEROS "00000000", "%p", NULL);
+ 	test(ZEROS "00000000", "%px", NULL);
+ 	test("(null)", "%pE", NULL);
+ }
+ 
++/*
++ * Error pointers aren't hashed.
++ */
++static void __init
++error_pointer(void)
++{
++	test(ONES "fffffff5", "%p", ERR_PTR(-11));
++	test(ONES "fffffff5", "%px", ERR_PTR(-11));
++	test("(efault)", "%pE", ERR_PTR(-11));
++}
++
+ #define PTR_INVALID ((void *)0x000000ab)
+ 
+ static void __init
+@@ -649,6 +665,7 @@ test_pointer(void)
+ {
+ 	plain();
+ 	null_pointer();
++	error_pointer();
+ 	invalid_pointer();
+ 	symbol_ptr();
+ 	kernel_ptr();
+diff --git a/lib/vsprintf.c b/lib/vsprintf.c
+index 532b6606a18a..7c47ad52ce2f 100644
+--- a/lib/vsprintf.c
++++ b/lib/vsprintf.c
+@@ -794,6 +794,13 @@ static char *ptr_to_id(char *buf, char *end, const void *ptr,
+ 	unsigned long hashval;
+ 	int ret;
+ 
++	/*
++	 * Print the real pointer value for NULL and error pointers,
++	 * as they are not actual addresses.
++	 */
++	if (IS_ERR_OR_NULL(ptr))
++		return pointer_string(buf, end, ptr, spec);
++
+ 	/* When debugging early boot use non-cryptographically secure hash. */
+ 	if (unlikely(debug_boot_weak_hash)) {
+ 		hashval = hash_long((unsigned long)ptr, 32);
+diff --git a/mm/kasan/Makefile b/mm/kasan/Makefile
+index 08b43de2383b..f36ffc090f5f 100644
+--- a/mm/kasan/Makefile
++++ b/mm/kasan/Makefile
+@@ -14,10 +14,10 @@ CFLAGS_REMOVE_tags.o = $(CC_FLAGS_FTRACE)
+ # Function splitter causes unnecessary splits in __asan_load1/__asan_store1
+ # see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=63533
+ 
+-CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+-CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+-CFLAGS_generic_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
+-CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector)
++CFLAGS_common.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
++CFLAGS_generic.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
++CFLAGS_generic_report.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
++CFLAGS_tags.o := $(call cc-option, -fno-conserve-stack -fno-stack-protector) -DDISABLE_BRANCH_PROFILING
+ 
+ obj-$(CONFIG_KASAN) := common.o init.o report.o
+ obj-$(CONFIG_KASAN_GENERIC) += generic.o generic_report.o quarantine.o
+diff --git a/mm/kasan/generic.c b/mm/kasan/generic.c
+index 616f9dd82d12..76a80033e0b7 100644
+--- a/mm/kasan/generic.c
++++ b/mm/kasan/generic.c
+@@ -15,7 +15,6 @@
+  */
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+-#define DISABLE_BRANCH_PROFILING
+ 
+ #include <linux/export.h>
+ #include <linux/interrupt.h>
+diff --git a/mm/kasan/tags.c b/mm/kasan/tags.c
+index 0e987c9ca052..caf4efd9888c 100644
+--- a/mm/kasan/tags.c
++++ b/mm/kasan/tags.c
+@@ -12,7 +12,6 @@
+  */
+ 
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+-#define DISABLE_BRANCH_PROFILING
+ 
+ #include <linux/export.h>
+ #include <linux/interrupt.h>
+diff --git a/mm/z3fold.c b/mm/z3fold.c
+index 42f31c4b53ad..8c3bb5e508b8 100644
+--- a/mm/z3fold.c
++++ b/mm/z3fold.c
+@@ -318,16 +318,16 @@ static inline void free_handle(unsigned long handle)
+ 	slots = handle_to_slots(handle);
+ 	write_lock(&slots->lock);
+ 	*(unsigned long *)handle = 0;
+-	write_unlock(&slots->lock);
+-	if (zhdr->slots == slots)
++	if (zhdr->slots == slots) {
++		write_unlock(&slots->lock);
+ 		return; /* simple case, nothing else to do */
++	}
+ 
+ 	/* we are freeing a foreign handle if we are here */
+ 	zhdr->foreign_handles--;
+ 	is_free = true;
+-	read_lock(&slots->lock);
+ 	if (!test_bit(HANDLES_ORPHANED, &slots->pool)) {
+-		read_unlock(&slots->lock);
++		write_unlock(&slots->lock);
+ 		return;
+ 	}
+ 	for (i = 0; i <= BUDDY_MASK; i++) {
+@@ -336,7 +336,7 @@ static inline void free_handle(unsigned long handle)
+ 			break;
+ 		}
+ 	}
+-	read_unlock(&slots->lock);
++	write_unlock(&slots->lock);
+ 
+ 	if (is_free) {
+ 		struct z3fold_pool *pool = slots_to_pool(slots);
+@@ -422,6 +422,7 @@ static struct z3fold_header *init_z3fold_page(struct page *page, bool headless,
+ 	zhdr->start_middle = 0;
+ 	zhdr->cpu = -1;
+ 	zhdr->foreign_handles = 0;
++	zhdr->mapped_count = 0;
+ 	zhdr->slots = slots;
+ 	zhdr->pool = pool;
+ 	INIT_LIST_HEAD(&zhdr->buddy);
+diff --git a/net/core/flow_dissector.c b/net/core/flow_dissector.c
+index a1670dff0629..0e5012d7b7b5 100644
+--- a/net/core/flow_dissector.c
++++ b/net/core/flow_dissector.c
+@@ -160,12 +160,10 @@ out:
+ 	return ret;
+ }
+ 
+-int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
++static int flow_dissector_bpf_prog_detach(struct net *net)
+ {
+ 	struct bpf_prog *attached;
+-	struct net *net;
+ 
+-	net = current->nsproxy->net_ns;
+ 	mutex_lock(&flow_dissector_mutex);
+ 	attached = rcu_dereference_protected(net->flow_dissector_prog,
+ 					     lockdep_is_held(&flow_dissector_mutex));
+@@ -179,6 +177,24 @@ int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
+ 	return 0;
+ }
+ 
++int skb_flow_dissector_bpf_prog_detach(const union bpf_attr *attr)
++{
++	return flow_dissector_bpf_prog_detach(current->nsproxy->net_ns);
++}
++
++static void __net_exit flow_dissector_pernet_pre_exit(struct net *net)
++{
++	/* We're not racing with attach/detach because there are no
++	 * references to netns left when pre_exit gets called.
++	 */
++	if (rcu_access_pointer(net->flow_dissector_prog))
++		flow_dissector_bpf_prog_detach(net);
++}
++
++static struct pernet_operations flow_dissector_pernet_ops __net_initdata = {
++	.pre_exit = flow_dissector_pernet_pre_exit,
++};
++
+ /**
+  * __skb_flow_get_ports - extract the upper layer ports and return them
+  * @skb: sk_buff to extract the ports from
+@@ -1838,7 +1854,7 @@ static int __init init_default_flow_dissectors(void)
+ 	skb_flow_dissector_init(&flow_keys_basic_dissector,
+ 				flow_keys_basic_dissector_keys,
+ 				ARRAY_SIZE(flow_keys_basic_dissector_keys));
+-	return 0;
+-}
+ 
++	return register_pernet_subsys(&flow_dissector_pernet_ops);
++}
+ core_initcall(init_default_flow_dissectors);
+diff --git a/net/rxrpc/Makefile b/net/rxrpc/Makefile
+index 6ffb7e9887ce..ddd0f95713a9 100644
+--- a/net/rxrpc/Makefile
++++ b/net/rxrpc/Makefile
+@@ -25,6 +25,7 @@ rxrpc-y := \
+ 	peer_event.o \
+ 	peer_object.o \
+ 	recvmsg.o \
++	rtt.o \
+ 	security.o \
+ 	sendmsg.o \
+ 	skbuff.o \
+diff --git a/net/rxrpc/ar-internal.h b/net/rxrpc/ar-internal.h
+index 3eb1ab40ca5c..9fe264bec70c 100644
+--- a/net/rxrpc/ar-internal.h
++++ b/net/rxrpc/ar-internal.h
+@@ -7,6 +7,7 @@
+ 
+ #include <linux/atomic.h>
+ #include <linux/seqlock.h>
++#include <linux/win_minmax.h>
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+ #include <net/sock.h>
+@@ -311,11 +312,14 @@ struct rxrpc_peer {
+ #define RXRPC_RTT_CACHE_SIZE 32
+ 	spinlock_t		rtt_input_lock;	/* RTT lock for input routine */
+ 	ktime_t			rtt_last_req;	/* Time of last RTT request */
+-	u64			rtt;		/* Current RTT estimate (in nS) */
+-	u64			rtt_sum;	/* Sum of cache contents */
+-	u64			rtt_cache[RXRPC_RTT_CACHE_SIZE]; /* Determined RTT cache */
+-	u8			rtt_cursor;	/* next entry at which to insert */
+-	u8			rtt_usage;	/* amount of cache actually used */
++	unsigned int		rtt_count;	/* Number of samples we've got */
++
++	u32			srtt_us;	/* smoothed round trip time << 3 in usecs */
++	u32			mdev_us;	/* medium deviation			*/
++	u32			mdev_max_us;	/* maximal mdev for the last rtt period	*/
++	u32			rttvar_us;	/* smoothed mdev_max			*/
++	u32			rto_j;		/* Retransmission timeout in jiffies */
++	u8			backoff;	/* Backoff timeout */
+ 
+ 	u8			cong_cwnd;	/* Congestion window size */
+ };
+@@ -1041,7 +1045,6 @@ extern unsigned long rxrpc_idle_ack_delay;
+ extern unsigned int rxrpc_rx_window_size;
+ extern unsigned int rxrpc_rx_mtu;
+ extern unsigned int rxrpc_rx_jumbo_max;
+-extern unsigned long rxrpc_resend_timeout;
+ 
+ extern const s8 rxrpc_ack_priority[];
+ 
+@@ -1069,8 +1072,6 @@ void rxrpc_send_keepalive(struct rxrpc_peer *);
+  * peer_event.c
+  */
+ void rxrpc_error_report(struct sock *);
+-void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
+-			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
+ void rxrpc_peer_keepalive_worker(struct work_struct *);
+ 
+ /*
+@@ -1102,6 +1103,14 @@ extern const struct seq_operations rxrpc_peer_seq_ops;
+ void rxrpc_notify_socket(struct rxrpc_call *);
+ int rxrpc_recvmsg(struct socket *, struct msghdr *, size_t, int);
+ 
++/*
++ * rtt.c
++ */
++void rxrpc_peer_add_rtt(struct rxrpc_call *, enum rxrpc_rtt_rx_trace,
++			rxrpc_serial_t, rxrpc_serial_t, ktime_t, ktime_t);
++unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *, bool);
++void rxrpc_peer_init_rtt(struct rxrpc_peer *);
++
+ /*
+  * rxkad.c
+  */
+diff --git a/net/rxrpc/call_accept.c b/net/rxrpc/call_accept.c
+index 70e44abf106c..b7611cc159e5 100644
+--- a/net/rxrpc/call_accept.c
++++ b/net/rxrpc/call_accept.c
+@@ -248,7 +248,7 @@ static void rxrpc_send_ping(struct rxrpc_call *call, struct sk_buff *skb)
+ 	struct rxrpc_skb_priv *sp = rxrpc_skb(skb);
+ 	ktime_t now = skb->tstamp;
+ 
+-	if (call->peer->rtt_usage < 3 ||
++	if (call->peer->rtt_count < 3 ||
+ 	    ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000), now))
+ 		rxrpc_propose_ACK(call, RXRPC_ACK_PING, sp->hdr.serial,
+ 				  true, true,
+diff --git a/net/rxrpc/call_event.c b/net/rxrpc/call_event.c
+index cedbbb3a7c2e..2a65ac41055f 100644
+--- a/net/rxrpc/call_event.c
++++ b/net/rxrpc/call_event.c
+@@ -111,8 +111,8 @@ static void __rxrpc_propose_ACK(struct rxrpc_call *call, u8 ack_reason,
+ 	} else {
+ 		unsigned long now = jiffies, ack_at;
+ 
+-		if (call->peer->rtt_usage > 0)
+-			ack_at = nsecs_to_jiffies(call->peer->rtt);
++		if (call->peer->srtt_us != 0)
++			ack_at = usecs_to_jiffies(call->peer->srtt_us >> 3);
+ 		else
+ 			ack_at = expiry;
+ 
+@@ -157,24 +157,18 @@ static void rxrpc_congestion_timeout(struct rxrpc_call *call)
+ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ {
+ 	struct sk_buff *skb;
+-	unsigned long resend_at;
++	unsigned long resend_at, rto_j;
+ 	rxrpc_seq_t cursor, seq, top;
+-	ktime_t now, max_age, oldest, ack_ts, timeout, min_timeo;
++	ktime_t now, max_age, oldest, ack_ts;
+ 	int ix;
+ 	u8 annotation, anno_type, retrans = 0, unacked = 0;
+ 
+ 	_enter("{%d,%d}", call->tx_hard_ack, call->tx_top);
+ 
+-	if (call->peer->rtt_usage > 1)
+-		timeout = ns_to_ktime(call->peer->rtt * 3 / 2);
+-	else
+-		timeout = ms_to_ktime(rxrpc_resend_timeout);
+-	min_timeo = ns_to_ktime((1000000000 / HZ) * 4);
+-	if (ktime_before(timeout, min_timeo))
+-		timeout = min_timeo;
++	rto_j = call->peer->rto_j;
+ 
+ 	now = ktime_get_real();
+-	max_age = ktime_sub(now, timeout);
++	max_age = ktime_sub(now, jiffies_to_usecs(rto_j));
+ 
+ 	spin_lock_bh(&call->lock);
+ 
+@@ -219,7 +213,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 	}
+ 
+ 	resend_at = nsecs_to_jiffies(ktime_to_ns(ktime_sub(now, oldest)));
+-	resend_at += jiffies + rxrpc_resend_timeout;
++	resend_at += jiffies + rto_j;
+ 	WRITE_ONCE(call->resend_at, resend_at);
+ 
+ 	if (unacked)
+@@ -234,7 +228,7 @@ static void rxrpc_resend(struct rxrpc_call *call, unsigned long now_j)
+ 					rxrpc_timer_set_for_resend);
+ 		spin_unlock_bh(&call->lock);
+ 		ack_ts = ktime_sub(now, call->acks_latest_ts);
+-		if (ktime_to_ns(ack_ts) < call->peer->rtt)
++		if (ktime_to_us(ack_ts) < (call->peer->srtt_us >> 3))
+ 			goto out;
+ 		rxrpc_propose_ACK(call, RXRPC_ACK_PING, 0, true, false,
+ 				  rxrpc_propose_ack_ping_for_lost_ack);
+diff --git a/net/rxrpc/input.c b/net/rxrpc/input.c
+index 69e09d69c896..3be4177baf70 100644
+--- a/net/rxrpc/input.c
++++ b/net/rxrpc/input.c
+@@ -91,11 +91,11 @@ static void rxrpc_congestion_management(struct rxrpc_call *call,
+ 		/* We analyse the number of packets that get ACK'd per RTT
+ 		 * period and increase the window if we managed to fill it.
+ 		 */
+-		if (call->peer->rtt_usage == 0)
++		if (call->peer->rtt_count == 0)
+ 			goto out;
+ 		if (ktime_before(skb->tstamp,
+-				 ktime_add_ns(call->cong_tstamp,
+-					      call->peer->rtt)))
++				 ktime_add_us(call->cong_tstamp,
++					      call->peer->srtt_us >> 3)))
+ 			goto out_no_clear_ca;
+ 		change = rxrpc_cong_rtt_window_end;
+ 		call->cong_tstamp = skb->tstamp;
+@@ -802,6 +802,30 @@ static void rxrpc_input_soft_acks(struct rxrpc_call *call, u8 *acks,
+ 	}
+ }
+ 
++/*
++ * Return true if the ACK is valid - ie. it doesn't appear to have regressed
++ * with respect to the ack state conveyed by preceding ACKs.
++ */
++static bool rxrpc_is_ack_valid(struct rxrpc_call *call,
++			       rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt)
++{
++	rxrpc_seq_t base = READ_ONCE(call->ackr_first_seq);
++
++	if (after(first_pkt, base))
++		return true; /* The window advanced */
++
++	if (before(first_pkt, base))
++		return false; /* firstPacket regressed */
++
++	if (after_eq(prev_pkt, call->ackr_prev_seq))
++		return true; /* previousPacket hasn't regressed. */
++
++	/* Some rx implementations put a serial number in previousPacket. */
++	if (after_eq(prev_pkt, base + call->tx_winsize))
++		return false;
++	return true;
++}
++
+ /*
+  * Process an ACK packet.
+  *
+@@ -865,9 +889,12 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	}
+ 
+ 	/* Discard any out-of-order or duplicate ACKs (outside lock). */
+-	if (before(first_soft_ack, call->ackr_first_seq) ||
+-	    before(prev_pkt, call->ackr_prev_seq))
++	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
++		trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
++					   first_soft_ack, call->ackr_first_seq,
++					   prev_pkt, call->ackr_prev_seq);
+ 		return;
++	}
+ 
+ 	buf.info.rxMTU = 0;
+ 	ioffset = offset + nr_acks + 3;
+@@ -878,9 +905,12 @@ static void rxrpc_input_ack(struct rxrpc_call *call, struct sk_buff *skb)
+ 	spin_lock(&call->input_lock);
+ 
+ 	/* Discard any out-of-order or duplicate ACKs (inside lock). */
+-	if (before(first_soft_ack, call->ackr_first_seq) ||
+-	    before(prev_pkt, call->ackr_prev_seq))
++	if (!rxrpc_is_ack_valid(call, first_soft_ack, prev_pkt)) {
++		trace_rxrpc_rx_discard_ack(call->debug_id, sp->hdr.serial,
++					   first_soft_ack, call->ackr_first_seq,
++					   prev_pkt, call->ackr_prev_seq);
+ 		goto out;
++	}
+ 	call->acks_latest_ts = skb->tstamp;
+ 
+ 	call->ackr_first_seq = first_soft_ack;
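The new `rxrpc_is_ack_valid()` above replaces two bare `before()` comparisons with a tolerant window check. The sketch below re-creates that decision table in isolation so the branch ordering can be exercised; the helper names and the `tx_winsize` parameter are stand-ins for the kernel's `before()`/`after_eq()` macros and `call->tx_winsize`, and sequence ordering uses signed 32-bit subtraction so it survives wraparound, as the kernel's helpers do:

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t rxrpc_seq_t;

/* Wrap-safe sequence ordering, as in the kernel's before()/after(). */
static int seq_after(rxrpc_seq_t a, rxrpc_seq_t b)
{
	return (int32_t)(a - b) > 0;
}

static int seq_before(rxrpc_seq_t a, rxrpc_seq_t b)
{
	return (int32_t)(a - b) < 0;
}

static int ack_is_valid(rxrpc_seq_t first_pkt, rxrpc_seq_t prev_pkt,
			rxrpc_seq_t ackr_first_seq, rxrpc_seq_t ackr_prev_seq,
			rxrpc_seq_t tx_winsize)
{
	if (seq_after(first_pkt, ackr_first_seq))
		return 1;	/* the window advanced */
	if (seq_before(first_pkt, ackr_first_seq))
		return 0;	/* firstPacket regressed */
	if (!seq_before(prev_pkt, ackr_prev_seq))
		return 1;	/* previousPacket hasn't regressed */
	/* Some rx implementations put a serial number in previousPacket,
	 * so a regression within one tx window is still tolerated. */
	if (!seq_before(prev_pkt, ackr_first_seq + tx_winsize))
		return 0;
	return 1;
}
```

The point of the extra branches is the last one: a peer that stuffs a serial number into previousPacket no longer gets its ACKs discarded wholesale, while a genuine regression of firstPacket is still rejected.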
+diff --git a/net/rxrpc/misc.c b/net/rxrpc/misc.c
+index 214405f75346..d4144fd86f84 100644
+--- a/net/rxrpc/misc.c
++++ b/net/rxrpc/misc.c
+@@ -63,11 +63,6 @@ unsigned int rxrpc_rx_mtu = 5692;
+  */
+ unsigned int rxrpc_rx_jumbo_max = 4;
+ 
+-/*
+- * Time till packet resend (in milliseconds).
+- */
+-unsigned long rxrpc_resend_timeout = 4 * HZ;
+-
+ const s8 rxrpc_ack_priority[] = {
+ 	[0]				= 0,
+ 	[RXRPC_ACK_DELAY]		= 1,
+diff --git a/net/rxrpc/output.c b/net/rxrpc/output.c
+index 90e263c6aa69..f8b632a5c619 100644
+--- a/net/rxrpc/output.c
++++ b/net/rxrpc/output.c
+@@ -369,7 +369,7 @@ int rxrpc_send_data_packet(struct rxrpc_call *call, struct sk_buff *skb,
+ 	    (test_and_clear_bit(RXRPC_CALL_EV_ACK_LOST, &call->events) ||
+ 	     retrans ||
+ 	     call->cong_mode == RXRPC_CALL_SLOW_START ||
+-	     (call->peer->rtt_usage < 3 && sp->hdr.seq & 1) ||
++	     (call->peer->rtt_count < 3 && sp->hdr.seq & 1) ||
+ 	     ktime_before(ktime_add_ms(call->peer->rtt_last_req, 1000),
+ 			  ktime_get_real())))
+ 		whdr.flags |= RXRPC_REQUEST_ACK;
+@@ -423,13 +423,10 @@ done:
+ 		if (whdr.flags & RXRPC_REQUEST_ACK) {
+ 			call->peer->rtt_last_req = skb->tstamp;
+ 			trace_rxrpc_rtt_tx(call, rxrpc_rtt_tx_data, serial);
+-			if (call->peer->rtt_usage > 1) {
++			if (call->peer->rtt_count > 1) {
+ 				unsigned long nowj = jiffies, ack_lost_at;
+ 
+-				ack_lost_at = nsecs_to_jiffies(2 * call->peer->rtt);
+-				if (ack_lost_at < 1)
+-					ack_lost_at = 1;
+-
++				ack_lost_at = rxrpc_get_rto_backoff(call->peer, retrans);
+ 				ack_lost_at += nowj;
+ 				WRITE_ONCE(call->ack_lost_at, ack_lost_at);
+ 				rxrpc_reduce_call_timer(call, ack_lost_at, nowj,
+diff --git a/net/rxrpc/peer_event.c b/net/rxrpc/peer_event.c
+index 923b263c401b..b1449d971883 100644
+--- a/net/rxrpc/peer_event.c
++++ b/net/rxrpc/peer_event.c
+@@ -295,52 +295,6 @@ static void rxrpc_distribute_error(struct rxrpc_peer *peer, int error,
+ 	}
+ }
+ 
+-/*
+- * Add RTT information to cache.  This is called in softirq mode and has
+- * exclusive access to the peer RTT data.
+- */
+-void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
+-			rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
+-			ktime_t send_time, ktime_t resp_time)
+-{
+-	struct rxrpc_peer *peer = call->peer;
+-	s64 rtt;
+-	u64 sum = peer->rtt_sum, avg;
+-	u8 cursor = peer->rtt_cursor, usage = peer->rtt_usage;
+-
+-	rtt = ktime_to_ns(ktime_sub(resp_time, send_time));
+-	if (rtt < 0)
+-		return;
+-
+-	spin_lock(&peer->rtt_input_lock);
+-
+-	/* Replace the oldest datum in the RTT buffer */
+-	sum -= peer->rtt_cache[cursor];
+-	sum += rtt;
+-	peer->rtt_cache[cursor] = rtt;
+-	peer->rtt_cursor = (cursor + 1) & (RXRPC_RTT_CACHE_SIZE - 1);
+-	peer->rtt_sum = sum;
+-	if (usage < RXRPC_RTT_CACHE_SIZE) {
+-		usage++;
+-		peer->rtt_usage = usage;
+-	}
+-
+-	spin_unlock(&peer->rtt_input_lock);
+-
+-	/* Now recalculate the average */
+-	if (usage == RXRPC_RTT_CACHE_SIZE) {
+-		avg = sum / RXRPC_RTT_CACHE_SIZE;
+-	} else {
+-		avg = sum;
+-		do_div(avg, usage);
+-	}
+-
+-	/* Don't need to update this under lock */
+-	peer->rtt = avg;
+-	trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial, rtt,
+-			   usage, avg);
+-}
+-
+ /*
+  * Perform keep-alive pings.
+  */
+diff --git a/net/rxrpc/peer_object.c b/net/rxrpc/peer_object.c
+index 452163eadb98..ca29976bb193 100644
+--- a/net/rxrpc/peer_object.c
++++ b/net/rxrpc/peer_object.c
+@@ -225,6 +225,8 @@ struct rxrpc_peer *rxrpc_alloc_peer(struct rxrpc_local *local, gfp_t gfp)
+ 		spin_lock_init(&peer->rtt_input_lock);
+ 		peer->debug_id = atomic_inc_return(&rxrpc_debug_id);
+ 
++		rxrpc_peer_init_rtt(peer);
++
+ 		if (RXRPC_TX_SMSS > 2190)
+ 			peer->cong_cwnd = 2;
+ 		else if (RXRPC_TX_SMSS > 1095)
+@@ -497,14 +499,14 @@ void rxrpc_kernel_get_peer(struct socket *sock, struct rxrpc_call *call,
+ EXPORT_SYMBOL(rxrpc_kernel_get_peer);
+ 
+ /**
+- * rxrpc_kernel_get_rtt - Get a call's peer RTT
++ * rxrpc_kernel_get_srtt - Get a call's peer smoothed RTT
+  * @sock: The socket on which the call is in progress.
+  * @call: The call to query
+  *
+- * Get the call's peer RTT.
++ * Get the call's peer smoothed RTT.
+  */
+-u64 rxrpc_kernel_get_rtt(struct socket *sock, struct rxrpc_call *call)
++u32 rxrpc_kernel_get_srtt(struct socket *sock, struct rxrpc_call *call)
+ {
+-	return call->peer->rtt;
++	return call->peer->srtt_us >> 3;
+ }
+-EXPORT_SYMBOL(rxrpc_kernel_get_rtt);
++EXPORT_SYMBOL(rxrpc_kernel_get_srtt);
+diff --git a/net/rxrpc/proc.c b/net/rxrpc/proc.c
+index b9d053e42821..8b179e3c802a 100644
+--- a/net/rxrpc/proc.c
++++ b/net/rxrpc/proc.c
+@@ -222,7 +222,7 @@ static int rxrpc_peer_seq_show(struct seq_file *seq, void *v)
+ 		seq_puts(seq,
+ 			 "Proto Local                                          "
+ 			 " Remote                                         "
+-			 " Use CW  MTU   LastUse          RTT Rc\n"
++			 " Use  CW   MTU LastUse      RTT      RTO\n"
+ 			 );
+ 		return 0;
+ 	}
+@@ -236,15 +236,15 @@ static int rxrpc_peer_seq_show(struct seq_file *seq, void *v)
+ 	now = ktime_get_seconds();
+ 	seq_printf(seq,
+ 		   "UDP   %-47.47s %-47.47s %3u"
+-		   " %3u %5u %6llus %12llu %2u\n",
++		   " %3u %5u %6llus %8u %8u\n",
+ 		   lbuff,
+ 		   rbuff,
+ 		   atomic_read(&peer->usage),
+ 		   peer->cong_cwnd,
+ 		   peer->mtu,
+ 		   now - peer->last_tx_at,
+-		   peer->rtt,
+-		   peer->rtt_cursor);
++		   peer->srtt_us >> 3,
++		   jiffies_to_usecs(peer->rto_j));
+ 
+ 	return 0;
+ }
+diff --git a/net/rxrpc/rtt.c b/net/rxrpc/rtt.c
+new file mode 100644
+index 000000000000..928d8b34a3ee
+--- /dev/null
++++ b/net/rxrpc/rtt.c
+@@ -0,0 +1,195 @@
++// SPDX-License-Identifier: GPL-2.0
++/* RTT/RTO calculation.
++ *
++ * Adapted from TCP for AF_RXRPC by David Howells (dhowells@redhat.com)
++ *
++ * https://tools.ietf.org/html/rfc6298
++ * https://tools.ietf.org/html/rfc1122#section-4.2.3.1
++ * http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-partridge87.pdf
++ */
++
++#include <linux/net.h>
++#include "ar-internal.h"
++
++#define RXRPC_RTO_MAX	((unsigned)(120 * HZ))
++#define RXRPC_TIMEOUT_INIT ((unsigned)(1*HZ))	/* RFC6298 2.1 initial RTO value	*/
++#define rxrpc_jiffies32 ((u32)jiffies)		/* As tcp_jiffies32 */
++#define rxrpc_min_rtt_wlen 300			/* As sysctl_tcp_min_rtt_wlen */
++
++static u32 rxrpc_rto_min_us(struct rxrpc_peer *peer)
++{
++	return 200;
++}
++
++static u32 __rxrpc_set_rto(const struct rxrpc_peer *peer)
++{
++	return _usecs_to_jiffies((peer->srtt_us >> 3) + peer->rttvar_us);
++}
++
++static u32 rxrpc_bound_rto(u32 rto)
++{
++	return min(rto, RXRPC_RTO_MAX);
++}
++
++/*
++ * Called to compute a smoothed rtt estimate. The data fed to this
++ * routine either comes from timestamps, or from segments that were
++ * known _not_ to have been retransmitted [see Karn/Partridge
++ * Proceedings SIGCOMM 87]. The algorithm is from the SIGCOMM 88
++ * piece by Van Jacobson.
++ * NOTE: the next three routines used to be one big routine.
++ * To save cycles in the RFC 1323 implementation it was better to break
++ * it up into three procedures. -- erics
++ */
++static void rxrpc_rtt_estimator(struct rxrpc_peer *peer, long sample_rtt_us)
++{
++	long m = sample_rtt_us; /* RTT */
++	u32 srtt = peer->srtt_us;
++
++	/*	The following amusing code comes from Jacobson's
++	 *	article in SIGCOMM '88.  Note that rtt and mdev
++	 *	are scaled versions of rtt and mean deviation.
++	 *	This is designed to be as fast as possible
++	 *	m stands for "measurement".
++	 *
++	 *	On a 1990 paper the rto value is changed to:
++	 *	RTO = rtt + 4 * mdev
++	 *
++	 * Funny. This algorithm seems to be very broken.
++	 * These formulae increase RTO, when it should be decreased, increase
++	 * too slowly, when it should be increased quickly, decrease too quickly
++	 * etc. I guess in BSD RTO takes ONE value, so that it is absolutely
++	 * does not matter how to _calculate_ it. Seems, it was trap
++	 * that VJ failed to avoid. 8)
++	 */
++	if (srtt != 0) {
++		m -= (srtt >> 3);	/* m is now error in rtt est */
++		srtt += m;		/* rtt = 7/8 rtt + 1/8 new */
++		if (m < 0) {
++			m = -m;		/* m is now abs(error) */
++			m -= (peer->mdev_us >> 2);   /* similar update on mdev */
++			/* This is similar to one of Eifel findings.
++			 * Eifel blocks mdev updates when rtt decreases.
++			 * This solution is a bit different: we use finer gain
++			 * for mdev in this case (alpha*beta).
++			 * Like Eifel it also prevents growth of rto,
++			 * but also it limits too fast rto decreases,
++			 * happening in pure Eifel.
++			 */
++			if (m > 0)
++				m >>= 3;
++		} else {
++			m -= (peer->mdev_us >> 2);   /* similar update on mdev */
++		}
++
++		peer->mdev_us += m;		/* mdev = 3/4 mdev + 1/4 new */
++		if (peer->mdev_us > peer->mdev_max_us) {
++			peer->mdev_max_us = peer->mdev_us;
++			if (peer->mdev_max_us > peer->rttvar_us)
++				peer->rttvar_us = peer->mdev_max_us;
++		}
++	} else {
++		/* no previous measure. */
++		srtt = m << 3;		/* take the measured time to be rtt */
++		peer->mdev_us = m << 1;	/* make sure rto = 3*rtt */
++		peer->rttvar_us = max(peer->mdev_us, rxrpc_rto_min_us(peer));
++		peer->mdev_max_us = peer->rttvar_us;
++	}
++
++	peer->srtt_us = max(1U, srtt);
++}
++
++/*
++ * Calculate rto without backoff.  This is the second half of Van Jacobson's
++ * routine referred to above.
++ */
++static void rxrpc_set_rto(struct rxrpc_peer *peer)
++{
++	u32 rto;
++
++	/* 1. If rtt variance happened to be less 50msec, it is hallucination.
++	 *    It cannot be less due to utterly erratic ACK generation made
++	 *    at least by solaris and freebsd. "Erratic ACKs" has _nothing_
++	 *    to do with delayed acks, because at cwnd>2 true delack timeout
++	 *    is invisible. Actually, Linux-2.4 also generates erratic
++	 *    ACKs in some circumstances.
++	 */
++	rto = __rxrpc_set_rto(peer);
++
++	/* 2. Fixups made earlier cannot be right.
++	 *    If we do not estimate RTO correctly without them,
++	 *    all the algo is pure shit and should be replaced
++	 *    with correct one. It is exactly, which we pretend to do.
++	 */
++
++	/* NOTE: clamping at RXRPC_RTO_MIN is not required, current algo
++	 * guarantees that rto is higher.
++	 */
++	peer->rto_j = rxrpc_bound_rto(rto);
++}
++
++static void rxrpc_ack_update_rtt(struct rxrpc_peer *peer, long rtt_us)
++{
++	if (rtt_us < 0)
++		return;
++
++	//rxrpc_update_rtt_min(peer, rtt_us);
++	rxrpc_rtt_estimator(peer, rtt_us);
++	rxrpc_set_rto(peer);
++
++	/* RFC6298: only reset backoff on valid RTT measurement. */
++	peer->backoff = 0;
++}
++
++/*
++ * Add RTT information to cache.  This is called in softirq mode and has
++ * exclusive access to the peer RTT data.
++ */
++void rxrpc_peer_add_rtt(struct rxrpc_call *call, enum rxrpc_rtt_rx_trace why,
++			rxrpc_serial_t send_serial, rxrpc_serial_t resp_serial,
++			ktime_t send_time, ktime_t resp_time)
++{
++	struct rxrpc_peer *peer = call->peer;
++	s64 rtt_us;
++
++	rtt_us = ktime_to_us(ktime_sub(resp_time, send_time));
++	if (rtt_us < 0)
++		return;
++
++	spin_lock(&peer->rtt_input_lock);
++	rxrpc_ack_update_rtt(peer, rtt_us);
++	if (peer->rtt_count < 3)
++		peer->rtt_count++;
++	spin_unlock(&peer->rtt_input_lock);
++
++	trace_rxrpc_rtt_rx(call, why, send_serial, resp_serial,
++			   peer->srtt_us >> 3, peer->rto_j);
++}
++
++/*
++ * Get the retransmission timeout to set in jiffies, backing it off each time
++ * we retransmit.
++ */
++unsigned long rxrpc_get_rto_backoff(struct rxrpc_peer *peer, bool retrans)
++{
++	u64 timo_j;
++	u8 backoff = READ_ONCE(peer->backoff);
++
++	timo_j = peer->rto_j;
++	timo_j <<= backoff;
++	if (retrans && timo_j * 2 <= RXRPC_RTO_MAX)
++		WRITE_ONCE(peer->backoff, backoff + 1);
++
++	if (timo_j < 1)
++		timo_j = 1;
++
++	return timo_j;
++}
++
++void rxrpc_peer_init_rtt(struct rxrpc_peer *peer)
++{
++	peer->rto_j	= RXRPC_TIMEOUT_INIT;
++	peer->mdev_us	= jiffies_to_usecs(RXRPC_TIMEOUT_INIT);
++	peer->backoff	= 0;
++	//minmax_reset(&peer->rtt_min, rxrpc_jiffies32, ~0U);
++}
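The new rtt.c above carries TCP's fixed-point estimator into rxrpc: `srtt_us` is kept scaled by 8 and `mdev_us` by 4, so the 7/8 and 3/4 smoothing gains reduce to shifts, and the RTO is `SRTT + 4*RTTVAR` per RFC 6298. The following is a condensed, HZ-independent sketch of that arithmetic (RTO in microseconds rather than jiffies); it is a reimplementation for illustration, and the Eifel-style damping the kernel applies when samples shrink is omitted for brevity:

```c
#include <assert.h>
#include <stdint.h>

/* Condensed sketch of the Jacobson/Karels estimator adopted from TCP.
 * srtt_us is the smoothed RTT << 3; mdev_us is the mean deviation << 2,
 * so rttvar_us already represents 4 * RTTVAR in the RFC 6298 formula. */
struct peer_rtt {
	uint32_t srtt_us;	/* smoothed RTT << 3 */
	uint32_t mdev_us;	/* mean deviation << 2 */
	uint32_t rttvar_us;	/* smoothed mdev maximum */
	uint8_t  backoff;	/* doubles the RTO per retransmission */
};

#define RTO_MIN_US 200UL

static void rtt_sample(struct peer_rtt *p, long m /* sample, usecs */)
{
	if (m < 0)
		return;
	if (p->srtt_us) {
		long err = m - (long)(p->srtt_us >> 3);

		p->srtt_us += err;		/* srtt = 7/8 srtt + 1/8 m */
		if (err < 0)
			err = -err;
		/* mdev = 3/4 mdev + 1/4 |err| (kernel adds Eifel damping) */
		p->mdev_us += err - (p->mdev_us >> 2);
		if (p->mdev_us > p->rttvar_us)
			p->rttvar_us = p->mdev_us;
	} else {
		p->srtt_us = m << 3;		/* first sample seeds srtt */
		p->mdev_us = m << 1;		/* so the RTO starts near 3*rtt */
		p->rttvar_us = p->mdev_us > RTO_MIN_US ? p->mdev_us : RTO_MIN_US;
	}
	p->backoff = 0;		/* RFC 6298: a valid sample resets backoff */
}

static uint32_t rto_us(const struct peer_rtt *p)
{
	/* RFC 6298: RTO = SRTT + 4 * RTTVAR, backed off exponentially. */
	return ((p->srtt_us >> 3) + p->rttvar_us) << p->backoff;
}
```

Feeding a steady 1 ms sample stream settles the smoothed RTT at 1 ms and the RTO at a few multiples of it, which is exactly the behaviour the old 32-entry rolling-average cache could not provide: that scheme tracked only the mean and had no deviation term, so its timeouts could not adapt to jitter.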
+diff --git a/net/rxrpc/rxkad.c b/net/rxrpc/rxkad.c
+index 098f1f9ec53b..52a24d4ef5d8 100644
+--- a/net/rxrpc/rxkad.c
++++ b/net/rxrpc/rxkad.c
+@@ -1148,7 +1148,7 @@ static int rxkad_verify_response(struct rxrpc_connection *conn,
+ 	ret = rxkad_decrypt_ticket(conn, skb, ticket, ticket_len, &session_key,
+ 				   &expiry, _abort_code);
+ 	if (ret < 0)
+-		goto temporary_error_free_resp;
++		goto temporary_error_free_ticket;
+ 
+ 	/* use the session key from inside the ticket to decrypt the
+ 	 * response */
+@@ -1230,7 +1230,6 @@ protocol_error:
+ 
+ temporary_error_free_ticket:
+ 	kfree(ticket);
+-temporary_error_free_resp:
+ 	kfree(response);
+ temporary_error:
+ 	/* Ignore the response packet if we got a temporary error such as
+diff --git a/net/rxrpc/sendmsg.c b/net/rxrpc/sendmsg.c
+index 0fcf157aa09f..5e9c43d4a314 100644
+--- a/net/rxrpc/sendmsg.c
++++ b/net/rxrpc/sendmsg.c
+@@ -66,15 +66,14 @@ static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx,
+ 					    struct rxrpc_call *call)
+ {
+ 	rxrpc_seq_t tx_start, tx_win;
+-	signed long rtt2, timeout;
+-	u64 rtt;
++	signed long rtt, timeout;
+ 
+-	rtt = READ_ONCE(call->peer->rtt);
+-	rtt2 = nsecs_to_jiffies64(rtt) * 2;
+-	if (rtt2 < 2)
+-		rtt2 = 2;
++	rtt = READ_ONCE(call->peer->srtt_us) >> 3;
++	rtt = usecs_to_jiffies(rtt) * 2;
++	if (rtt < 2)
++		rtt = 2;
+ 
+-	timeout = rtt2;
++	timeout = rtt;
+ 	tx_start = READ_ONCE(call->tx_hard_ack);
+ 
+ 	for (;;) {
+@@ -92,7 +91,7 @@ static int rxrpc_wait_for_tx_window_waitall(struct rxrpc_sock *rx,
+ 			return -EINTR;
+ 
+ 		if (tx_win != tx_start) {
+-			timeout = rtt2;
++			timeout = rtt;
+ 			tx_start = tx_win;
+ 		}
+ 
+@@ -271,16 +270,9 @@ static int rxrpc_queue_packet(struct rxrpc_sock *rx, struct rxrpc_call *call,
+ 		_debug("need instant resend %d", ret);
+ 		rxrpc_instant_resend(call, ix);
+ 	} else {
+-		unsigned long now = jiffies, resend_at;
++		unsigned long now = jiffies;
++		unsigned long resend_at = now + call->peer->rto_j;
+ 
+-		if (call->peer->rtt_usage > 1)
+-			resend_at = nsecs_to_jiffies(call->peer->rtt * 3 / 2);
+-		else
+-			resend_at = rxrpc_resend_timeout;
+-		if (resend_at < 1)
+-			resend_at = 1;
+-
+-		resend_at += now;
+ 		WRITE_ONCE(call->resend_at, resend_at);
+ 		rxrpc_reduce_call_timer(call, resend_at, now,
+ 					rxrpc_timer_set_for_send);
+diff --git a/net/rxrpc/sysctl.c b/net/rxrpc/sysctl.c
+index 2bbb38161851..18dade4e6f9a 100644
+--- a/net/rxrpc/sysctl.c
++++ b/net/rxrpc/sysctl.c
+@@ -71,15 +71,6 @@ static struct ctl_table rxrpc_sysctl_table[] = {
+ 		.extra1		= (void *)&one_jiffy,
+ 		.extra2		= (void *)&max_jiffies,
+ 	},
+-	{
+-		.procname	= "resend_timeout",
+-		.data		= &rxrpc_resend_timeout,
+-		.maxlen		= sizeof(unsigned long),
+-		.mode		= 0644,
+-		.proc_handler	= proc_doulongvec_ms_jiffies_minmax,
+-		.extra1		= (void *)&one_jiffy,
+-		.extra2		= (void *)&max_jiffies,
+-	},
+ 
+ 	/* Non-time values */
+ 	{
+diff --git a/scripts/gcc-plugins/Makefile b/scripts/gcc-plugins/Makefile
+index f2ee8bd7abc6..1d0b9382e759 100644
+--- a/scripts/gcc-plugins/Makefile
++++ b/scripts/gcc-plugins/Makefile
+@@ -11,6 +11,7 @@ else
+   HOST_EXTRACXXFLAGS += -I$(GCC_PLUGINS_DIR)/include -I$(src) -std=gnu++98 -fno-rtti
+   HOST_EXTRACXXFLAGS += -fno-exceptions -fasynchronous-unwind-tables -ggdb
+   HOST_EXTRACXXFLAGS += -Wno-narrowing -Wno-unused-variable
++  HOST_EXTRACXXFLAGS += -Wno-format-diag
+   export HOST_EXTRACXXFLAGS
+ endif
+ 
+diff --git a/scripts/gcc-plugins/gcc-common.h b/scripts/gcc-plugins/gcc-common.h
+index 17f06079a712..9ad76b7f3f10 100644
+--- a/scripts/gcc-plugins/gcc-common.h
++++ b/scripts/gcc-plugins/gcc-common.h
+@@ -35,7 +35,9 @@
+ #include "ggc.h"
+ #include "timevar.h"
+ 
++#if BUILDING_GCC_VERSION < 10000
+ #include "params.h"
++#endif
+ 
+ #if BUILDING_GCC_VERSION <= 4009
+ #include "pointer-set.h"
+@@ -847,6 +849,7 @@ static inline gimple gimple_build_assign_with_ops(enum tree_code subcode, tree l
+ 	return gimple_build_assign(lhs, subcode, op1, op2 PASS_MEM_STAT);
+ }
+ 
++#if BUILDING_GCC_VERSION < 10000
+ template <>
+ template <>
+ inline bool is_a_helper<const ggoto *>::test(const_gimple gs)
+@@ -860,6 +863,7 @@ inline bool is_a_helper<const greturn *>::test(const_gimple gs)
+ {
+ 	return gs->code == GIMPLE_RETURN;
+ }
++#endif
+ 
+ static inline gasm *as_a_gasm(gimple stmt)
+ {
+diff --git a/scripts/gdb/linux/rbtree.py b/scripts/gdb/linux/rbtree.py
+index 39db889b874c..c4b991607917 100644
+--- a/scripts/gdb/linux/rbtree.py
++++ b/scripts/gdb/linux/rbtree.py
+@@ -12,7 +12,7 @@ rb_node_type = utils.CachedType("struct rb_node")
+ 
+ def rb_first(root):
+     if root.type == rb_root_type.get_type():
+-        node = node.address.cast(rb_root_type.get_type().pointer())
++        node = root.address.cast(rb_root_type.get_type().pointer())
+     elif root.type != rb_root_type.get_type().pointer():
+         raise gdb.GdbError("Must be struct rb_root not {}".format(root.type))
+ 
+@@ -28,7 +28,7 @@ def rb_first(root):
+ 
+ def rb_last(root):
+     if root.type == rb_root_type.get_type():
+-        node = node.address.cast(rb_root_type.get_type().pointer())
++        node = root.address.cast(rb_root_type.get_type().pointer())
+     elif root.type != rb_root_type.get_type().pointer():
+         raise gdb.GdbError("Must be struct rb_root not {}".format(root.type))
+ 
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index dd484e92752e..ac569e197bfa 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -63,12 +63,18 @@ vmlinux_link()
+ 	local lds="${objtree}/${KBUILD_LDS}"
+ 	local output=${1}
+ 	local objects
++	local strip_debug
+ 
+ 	info LD ${output}
+ 
+ 	# skip output file argument
+ 	shift
+ 
++	# The kallsyms linking does not need debug symbols included.
++	if [ "$output" != "${output#.tmp_vmlinux.kallsyms}" ] ; then
++		strip_debug=-Wl,--strip-debug
++	fi
++
+ 	if [ "${SRCARCH}" != "um" ]; then
+ 		objects="--whole-archive			\
+ 			${KBUILD_VMLINUX_OBJS}			\
+@@ -79,6 +85,7 @@ vmlinux_link()
+ 			${@}"
+ 
+ 		${LD} ${KBUILD_LDFLAGS} ${LDFLAGS_vmlinux}	\
++			${strip_debug#-Wl,}			\
+ 			-o ${output}				\
+ 			-T ${lds} ${objects}
+ 	else
+@@ -91,6 +98,7 @@ vmlinux_link()
+ 			${@}"
+ 
+ 		${CC} ${CFLAGS_vmlinux}				\
++			${strip_debug}				\
+ 			-o ${output}				\
+ 			-Wl,-T,${lds}				\
+ 			${objects}				\
+@@ -106,6 +114,8 @@ gen_btf()
+ {
+ 	local pahole_ver
+ 	local bin_arch
++	local bin_format
++	local bin_file
+ 
+ 	if ! [ -x "$(command -v ${PAHOLE})" ]; then
+ 		echo >&2 "BTF: ${1}: pahole (${PAHOLE}) is not available"
+@@ -118,8 +128,9 @@ gen_btf()
+ 		return 1
+ 	fi
+ 
+-	info "BTF" ${2}
+ 	vmlinux_link ${1}
++
++	info "BTF" ${2}
+ 	LLVM_OBJCOPY=${OBJCOPY} ${PAHOLE} -J ${1}
+ 
+ 	# dump .BTF section into raw binary file to link with final vmlinux
+@@ -127,11 +138,12 @@ gen_btf()
+ 		cut -d, -f1 | cut -d' ' -f2)
+ 	bin_format=$(LANG=C ${OBJDUMP} -f ${1} | grep 'file format' | \
+ 		awk '{print $4}')
++	bin_file=.btf.vmlinux.bin
+ 	${OBJCOPY} --change-section-address .BTF=0 \
+ 		--set-section-flags .BTF=alloc -O binary \
+-		--only-section=.BTF ${1} .btf.vmlinux.bin
++		--only-section=.BTF ${1} $bin_file
+ 	${OBJCOPY} -I binary -O ${bin_format} -B ${bin_arch} \
+-		--rename-section .data=.BTF .btf.vmlinux.bin ${2}
++		--rename-section .data=.BTF $bin_file ${2}
+ }
+ 
+ # Create ${2} .o file with all symbols from the ${1} object file
+@@ -166,8 +178,8 @@ kallsyms()
+ kallsyms_step()
+ {
+ 	kallsymso_prev=${kallsymso}
+-	kallsymso=.tmp_kallsyms${1}.o
+-	kallsyms_vmlinux=.tmp_vmlinux${1}
++	kallsyms_vmlinux=.tmp_vmlinux.kallsyms${1}
++	kallsymso=${kallsyms_vmlinux}.o
+ 
+ 	vmlinux_link ${kallsyms_vmlinux} "${kallsymso_prev}" ${btf_vmlinux_bin_o}
+ 	kallsyms ${kallsyms_vmlinux} ${kallsymso}
+@@ -190,7 +202,6 @@ cleanup()
+ {
+ 	rm -f .btf.*
+ 	rm -f .tmp_System.map
+-	rm -f .tmp_kallsyms*
+ 	rm -f .tmp_vmlinux*
+ 	rm -f System.map
+ 	rm -f vmlinux
+@@ -257,9 +268,8 @@ tr '\0' '\n' < modules.builtin.modinfo | sed -n 's/^[[:alnum:]:_]*\.file=//p' |
+ 
+ btf_vmlinux_bin_o=""
+ if [ -n "${CONFIG_DEBUG_INFO_BTF}" ]; then
+-	if gen_btf .tmp_vmlinux.btf .btf.vmlinux.bin.o ; then
+-		btf_vmlinux_bin_o=.btf.vmlinux.bin.o
+-	else
++	btf_vmlinux_bin_o=.btf.vmlinux.bin.o
++	if ! gen_btf .tmp_vmlinux.btf $btf_vmlinux_bin_o ; then
+ 		echo >&2 "Failed to generate BTF for vmlinux"
+ 		echo >&2 "Try to disable CONFIG_DEBUG_INFO_BTF"
+ 		exit 1
+diff --git a/security/apparmor/apparmorfs.c b/security/apparmor/apparmorfs.c
+index 280741fc0f5f..f6a3ecfadf80 100644
+--- a/security/apparmor/apparmorfs.c
++++ b/security/apparmor/apparmorfs.c
+@@ -454,7 +454,7 @@ static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,
+ 	 */
+ 	error = aa_may_manage_policy(label, ns, mask);
+ 	if (error)
+-		return error;
++		goto end_section;
+ 
+ 	data = aa_simple_write_to_buffer(buf, size, size, pos);
+ 	error = PTR_ERR(data);
+@@ -462,6 +462,7 @@ static ssize_t policy_update(u32 mask, const char __user *buf, size_t size,
+ 		error = aa_replace_profiles(ns, label, mask, data);
+ 		aa_put_loaddata(data);
+ 	}
++end_section:
+ 	end_current_label_crit_section(label);
+ 
+ 	return error;
+diff --git a/security/apparmor/audit.c b/security/apparmor/audit.c
+index 5a98661a8b46..597732503815 100644
+--- a/security/apparmor/audit.c
++++ b/security/apparmor/audit.c
+@@ -197,8 +197,9 @@ int aa_audit_rule_init(u32 field, u32 op, char *rulestr, void **vrule)
+ 	rule->label = aa_label_parse(&root_ns->unconfined->label, rulestr,
+ 				     GFP_KERNEL, true, false);
+ 	if (IS_ERR(rule->label)) {
++		int err = PTR_ERR(rule->label);
+ 		aa_audit_rule_free(rule);
+-		return PTR_ERR(rule->label);
++		return err;
+ 	}
+ 
+ 	*vrule = rule;
+diff --git a/security/apparmor/domain.c b/security/apparmor/domain.c
+index 6ceb74e0f789..a84ef030fbd7 100644
+--- a/security/apparmor/domain.c
++++ b/security/apparmor/domain.c
+@@ -1328,6 +1328,7 @@ int aa_change_profile(const char *fqname, int flags)
+ 		ctx->nnp = aa_get_label(label);
+ 
+ 	if (!fqname || !*fqname) {
++		aa_put_label(label);
+ 		AA_DEBUG("no profile name");
+ 		return -EINVAL;
+ 	}
+@@ -1346,8 +1347,6 @@ int aa_change_profile(const char *fqname, int flags)
+ 			op = OP_CHANGE_PROFILE;
+ 	}
+ 
+-	label = aa_get_current_label();
+-
+ 	if (*fqname == '&') {
+ 		stack = true;
+ 		/* don't have label_parse() do stacking */
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index d485f6fc908e..cc826c2767a3 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -75,7 +75,7 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
+ {
+ 	long rc;
+ 	const char *algo;
+-	struct crypto_shash **tfm;
++	struct crypto_shash **tfm, *tmp_tfm;
+ 	struct shash_desc *desc;
+ 
+ 	if (type == EVM_XATTR_HMAC) {
+@@ -93,31 +93,31 @@ static struct shash_desc *init_desc(char type, uint8_t hash_algo)
+ 		algo = hash_algo_name[hash_algo];
+ 	}
+ 
+-	if (*tfm == NULL) {
+-		mutex_lock(&mutex);
+-		if (*tfm)
+-			goto out;
+-		*tfm = crypto_alloc_shash(algo, 0, CRYPTO_NOLOAD);
+-		if (IS_ERR(*tfm)) {
+-			rc = PTR_ERR(*tfm);
+-			pr_err("Can not allocate %s (reason: %ld)\n", algo, rc);
+-			*tfm = NULL;
++	if (*tfm)
++		goto alloc;
++	mutex_lock(&mutex);
++	if (*tfm)
++		goto unlock;
++
++	tmp_tfm = crypto_alloc_shash(algo, 0, CRYPTO_NOLOAD);
++	if (IS_ERR(tmp_tfm)) {
++		pr_err("Can not allocate %s (reason: %ld)\n", algo,
++		       PTR_ERR(tmp_tfm));
++		mutex_unlock(&mutex);
++		return ERR_CAST(tmp_tfm);
++	}
++	if (type == EVM_XATTR_HMAC) {
++		rc = crypto_shash_setkey(tmp_tfm, evmkey, evmkey_len);
++		if (rc) {
++			crypto_free_shash(tmp_tfm);
+ 			mutex_unlock(&mutex);
+ 			return ERR_PTR(rc);
+ 		}
+-		if (type == EVM_XATTR_HMAC) {
+-			rc = crypto_shash_setkey(*tfm, evmkey, evmkey_len);
+-			if (rc) {
+-				crypto_free_shash(*tfm);
+-				*tfm = NULL;
+-				mutex_unlock(&mutex);
+-				return ERR_PTR(rc);
+-			}
+-		}
+-out:
+-		mutex_unlock(&mutex);
+ 	}
+-
++	*tfm = tmp_tfm;
++unlock:
++	mutex_unlock(&mutex);
++alloc:
+ 	desc = kmalloc(sizeof(*desc) + crypto_shash_descsize(*tfm),
+ 			GFP_KERNEL);
+ 	if (!desc)
+diff --git a/security/integrity/ima/ima_crypto.c b/security/integrity/ima/ima_crypto.c
+index 7967a6904851..e8fa23cd4a6c 100644
+--- a/security/integrity/ima/ima_crypto.c
++++ b/security/integrity/ima/ima_crypto.c
+@@ -413,7 +413,7 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+ 	loff_t i_size;
+ 	int rc;
+ 	struct file *f = file;
+-	bool new_file_instance = false, modified_flags = false;
++	bool new_file_instance = false, modified_mode = false;
+ 
+ 	/*
+ 	 * For consistency, fail file's opened with the O_DIRECT flag on
+@@ -433,13 +433,13 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+ 		f = dentry_open(&file->f_path, flags, file->f_cred);
+ 		if (IS_ERR(f)) {
+ 			/*
+-			 * Cannot open the file again, lets modify f_flags
++			 * Cannot open the file again, lets modify f_mode
+ 			 * of original and continue
+ 			 */
+ 			pr_info_ratelimited("Unable to reopen file for reading.\n");
+ 			f = file;
+-			f->f_flags |= FMODE_READ;
+-			modified_flags = true;
++			f->f_mode |= FMODE_READ;
++			modified_mode = true;
+ 		} else {
+ 			new_file_instance = true;
+ 		}
+@@ -457,8 +457,8 @@ int ima_calc_file_hash(struct file *file, struct ima_digest_data *hash)
+ out:
+ 	if (new_file_instance)
+ 		fput(f);
+-	else if (modified_flags)
+-		f->f_flags &= ~FMODE_READ;
++	else if (modified_mode)
++		f->f_mode &= ~FMODE_READ;
+ 	return rc;
+ }
+ 
+diff --git a/security/integrity/ima/ima_fs.c b/security/integrity/ima/ima_fs.c
+index 2000e8df0301..68571c40d61f 100644
+--- a/security/integrity/ima/ima_fs.c
++++ b/security/integrity/ima/ima_fs.c
+@@ -340,8 +340,7 @@ static ssize_t ima_write_policy(struct file *file, const char __user *buf,
+ 		integrity_audit_msg(AUDIT_INTEGRITY_STATUS, NULL, NULL,
+ 				    "policy_update", "signed policy required",
+ 				    1, 0);
+-		if (ima_appraise & IMA_APPRAISE_ENFORCE)
+-			result = -EACCES;
++		result = -EACCES;
+ 	} else {
+ 		result = ima_parse_add_rule(data);
+ 	}
+diff --git a/sound/core/pcm_lib.c b/sound/core/pcm_lib.c
+index 872a852de75c..d531e1bc2b81 100644
+--- a/sound/core/pcm_lib.c
++++ b/sound/core/pcm_lib.c
+@@ -433,6 +433,7 @@ static int snd_pcm_update_hw_ptr0(struct snd_pcm_substream *substream,
+ 
+  no_delta_check:
+ 	if (runtime->status->hw_ptr == new_hw_ptr) {
++		runtime->hw_ptr_jiffies = curr_jiffies;
+ 		update_audio_tstamp(substream, &curr_tstamp, &audio_tstamp);
+ 		return 0;
+ 	}
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index d73c814358bf..041d2a32059b 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -86,6 +86,14 @@ struct alc_spec {
+ 
+ 	unsigned int gpio_mute_led_mask;
+ 	unsigned int gpio_mic_led_mask;
++	unsigned int mute_led_coef_idx;
++	unsigned int mute_led_coefbit_mask;
++	unsigned int mute_led_coefbit_on;
++	unsigned int mute_led_coefbit_off;
++	unsigned int mic_led_coef_idx;
++	unsigned int mic_led_coefbit_mask;
++	unsigned int mic_led_coefbit_on;
++	unsigned int mic_led_coefbit_off;
+ 
+ 	hda_nid_t headset_mic_pin;
+ 	hda_nid_t headphone_mic_pin;
+@@ -2449,6 +2457,7 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1458, 0xa002, "Gigabyte EP45-DS3/Z87X-UD3H", ALC889_FIXUP_FRONT_HP_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1458, 0xa0b8, "Gigabyte AZ370-Gaming", ALC1220_FIXUP_GB_DUAL_CODECS),
+ 	SND_PCI_QUIRK(0x1458, 0xa0cd, "Gigabyte X570 Aorus Master", ALC1220_FIXUP_CLEVO_P950),
++	SND_PCI_QUIRK(0x1458, 0xa0ce, "Gigabyte X570 Aorus Xtreme", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1228, "MSI-GP63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1275, "MSI-GL63", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1462, 0x1276, "MSI-GL73", ALC1220_FIXUP_CLEVO_P950),
+@@ -2464,6 +2473,9 @@ static const struct snd_pci_quirk alc882_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1558, 0x97e1, "Clevo P970[ER][CDFN]", ALC1220_FIXUP_CLEVO_P950),
+ 	SND_PCI_QUIRK(0x1558, 0x65d1, "Clevo PB51[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK(0x1558, 0x67d1, "Clevo PB71[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x50d3, "Clevo PC50[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x70d1, "Clevo PC70[ER][CDF]", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
++	SND_PCI_QUIRK(0x1558, 0x7714, "Clevo X170", ALC1220_FIXUP_CLEVO_PB51ED_PINS),
+ 	SND_PCI_QUIRK_VENDOR(0x1558, "Clevo laptop", ALC882_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x161f, 0x2054, "Medion laptop", ALC883_FIXUP_EAPD),
+ 	SND_PCI_QUIRK(0x17aa, 0x3a0d, "Lenovo Y530", ALC882_FIXUP_LENOVO_Y530),
+@@ -4182,6 +4194,111 @@ static void alc280_fixup_hp_gpio4(struct hda_codec *codec,
+ 	}
+ }
+ 
++/* update mute-LED according to the speaker mute state via COEF bit */
++static void alc_fixup_mute_led_coefbit_hook(void *private_data, int enabled)
++{
++	struct hda_codec *codec = private_data;
++	struct alc_spec *spec = codec->spec;
++
++	if (spec->mute_led_polarity)
++		enabled = !enabled;
++
++	/* temporarily power up/down for setting COEF bit */
++	enabled ? alc_update_coef_idx(codec, spec->mute_led_coef_idx,
++		spec->mute_led_coefbit_mask, spec->mute_led_coefbit_off) :
++		  alc_update_coef_idx(codec, spec->mute_led_coef_idx,
++		spec->mute_led_coefbit_mask, spec->mute_led_coefbit_on);
++}
++
++static void alc285_fixup_hp_mute_led_coefbit(struct hda_codec *codec,
++					  const struct hda_fixup *fix,
++					  int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->mute_led_polarity = 0;
++		spec->mute_led_coef_idx = 0x0b;
++		spec->mute_led_coefbit_mask = 1<<3;
++		spec->mute_led_coefbit_on = 1<<3;
++		spec->mute_led_coefbit_off = 0;
++		spec->gen.vmaster_mute.hook = alc_fixup_mute_led_coefbit_hook;
++		spec->gen.vmaster_mute_enum = 1;
++	}
++}
++
++static void alc236_fixup_hp_mute_led_coefbit(struct hda_codec *codec,
++					  const struct hda_fixup *fix,
++					  int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->mute_led_polarity = 0;
++		spec->mute_led_coef_idx = 0x34;
++		spec->mute_led_coefbit_mask = 1<<5;
++		spec->mute_led_coefbit_on = 0;
++		spec->mute_led_coefbit_off = 1<<5;
++		spec->gen.vmaster_mute.hook = alc_fixup_mute_led_coefbit_hook;
++		spec->gen.vmaster_mute_enum = 1;
++	}
++}
++
++/* turn on/off mic-mute LED per capture hook by coef bit */
++static void alc_hp_cap_micmute_update(struct hda_codec *codec)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (spec->gen.micmute_led.led_value)
++		alc_update_coef_idx(codec, spec->mic_led_coef_idx,
++			spec->mic_led_coefbit_mask, spec->mic_led_coefbit_on);
++	else
++		alc_update_coef_idx(codec, spec->mic_led_coef_idx,
++			spec->mic_led_coefbit_mask, spec->mic_led_coefbit_off);
++}
++
++static void alc285_fixup_hp_coef_micmute_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->mic_led_coef_idx = 0x19;
++		spec->mic_led_coefbit_mask = 1<<13;
++		spec->mic_led_coefbit_on = 1<<13;
++		spec->mic_led_coefbit_off = 0;
++		snd_hda_gen_add_micmute_led(codec, alc_hp_cap_micmute_update);
++	}
++}
++
++static void alc236_fixup_hp_coef_micmute_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
++		spec->mic_led_coef_idx = 0x35;
++		spec->mic_led_coefbit_mask = 3<<2;
++		spec->mic_led_coefbit_on = 2<<2;
++		spec->mic_led_coefbit_off = 1<<2;
++		snd_hda_gen_add_micmute_led(codec, alc_hp_cap_micmute_update);
++	}
++}
++
++static void alc285_fixup_hp_mute_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	alc285_fixup_hp_mute_led_coefbit(codec, fix, action);
++	alc285_fixup_hp_coef_micmute_led(codec, fix, action);
++}
++
++static void alc236_fixup_hp_mute_led(struct hda_codec *codec,
++				const struct hda_fixup *fix, int action)
++{
++	alc236_fixup_hp_mute_led_coefbit(codec, fix, action);
++	alc236_fixup_hp_coef_micmute_led(codec, fix, action);
++}
++
+ #if IS_REACHABLE(CONFIG_INPUT)
+ static void gpio2_mic_hotkey_event(struct hda_codec *codec,
+ 				   struct hda_jack_callback *event)
+@@ -5980,6 +6097,10 @@ enum {
+ 	ALC294_FIXUP_ASUS_HPE,
+ 	ALC294_FIXUP_ASUS_COEF_1B,
+ 	ALC285_FIXUP_HP_GPIO_LED,
++	ALC285_FIXUP_HP_MUTE_LED,
++	ALC236_FIXUP_HP_MUTE_LED,
++	ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET,
++	ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
+ };
+ 
+ static const struct hda_fixup alc269_fixups[] = {
+@@ -7128,6 +7249,30 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc285_fixup_hp_gpio_led,
+ 	},
++	[ALC285_FIXUP_HP_MUTE_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc285_fixup_hp_mute_led,
++	},
++	[ALC236_FIXUP_HP_MUTE_LED] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc236_fixup_hp_mute_led,
++	},
++	[ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET] = {
++		.type = HDA_FIXUP_VERBS,
++		.v.verbs = (const struct hda_verb[]) {
++			{ 0x1a, AC_VERB_SET_PIN_WIDGET_CONTROL, 0xc5 },
++			{ }
++		},
++	},
++	[ALC295_FIXUP_ASUS_MIC_NO_PRESENCE] = {
++		.type = HDA_FIXUP_PINS,
++		.v.pins = (const struct hda_pintbl[]) {
++			{ 0x19, 0x01a1913c }, /* use as headset mic, without its own jack detect */
++			{ }
++		},
++		.chained = true,
++		.chain_id = ALC269_FIXUP_HEADSET_MODE
++	},
+ };
+ 
+ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+@@ -7273,6 +7418,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x103c, 0x8497, "HP Envy x360", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x84e7, "HP Pavilion 15", ALC269_FIXUP_HP_MUTE_LED_MIC3),
+ 	SND_PCI_QUIRK(0x103c, 0x8736, "HP", ALC285_FIXUP_HP_GPIO_LED),
++	SND_PCI_QUIRK(0x103c, 0x877a, "HP", ALC285_FIXUP_HP_MUTE_LED),
++	SND_PCI_QUIRK(0x103c, 0x877d, "HP", ALC236_FIXUP_HP_MUTE_LED),
+ 	SND_PCI_QUIRK(0x1043, 0x103e, "ASUS X540SA", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x103f, "ASUS TX300", ALC282_FIXUP_ASUS_TX300),
+ 	SND_PCI_QUIRK(0x1043, 0x106d, "Asus K53BE", ALC269_FIXUP_LIMIT_INT_MIC_BOOST),
+@@ -7293,6 +7440,7 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x1043, 0x18b1, "Asus MJ401TA", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x18f1, "Asus FX505DT", ALC256_FIXUP_ASUS_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x19ce, "ASUS B9450FA", ALC294_FIXUP_ASUS_HPE),
++	SND_PCI_QUIRK(0x1043, 0x19e1, "ASUS UX581LV", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE),
+ 	SND_PCI_QUIRK(0x1043, 0x1a13, "Asus G73Jw", ALC269_FIXUP_ASUS_G73JW),
+ 	SND_PCI_QUIRK(0x1043, 0x1a30, "ASUS X705UD", ALC256_FIXUP_ASUS_MIC),
+ 	SND_PCI_QUIRK(0x1043, 0x1b11, "ASUS UX431DA", ALC294_FIXUP_ASUS_COEF_1B),
+@@ -7321,6 +7469,8 @@ static const struct snd_pci_quirk alc269_fixup_tbl[] = {
+ 	SND_PCI_QUIRK(0x10ec, 0x10f2, "Intel Reference board", ALC700_FIXUP_INTEL_REFERENCE),
+ 	SND_PCI_QUIRK(0x10f7, 0x8338, "Panasonic CF-SZ6", ALC269_FIXUP_HEADSET_MODE),
+ 	SND_PCI_QUIRK(0x144d, 0xc109, "Samsung Ativ book 9 (NP900X3G)", ALC269_FIXUP_INV_DMIC),
++	SND_PCI_QUIRK(0x144d, 0xc169, "Samsung Notebook 9 Pen (NP930SBE-K01US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
++	SND_PCI_QUIRK(0x144d, 0xc176, "Samsung Notebook 9 Pro (NP930MBE-K04US)", ALC298_FIXUP_SAMSUNG_HEADPHONE_VERY_QUIET),
+ 	SND_PCI_QUIRK(0x144d, 0xc740, "Samsung Ativ book 8 (NP870Z5G)", ALC269_FIXUP_ATIV_BOOK_8),
+ 	SND_PCI_QUIRK(0x1458, 0xfa53, "Gigabyte BXBT-2807", ALC283_FIXUP_HEADSET_MIC),
+ 	SND_PCI_QUIRK(0x1462, 0xb120, "MSI Cubi MS-B120", ALC283_FIXUP_HEADSET_MIC),
+@@ -7905,6 +8055,18 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		{0x12, 0x90a60130},
+ 		{0x17, 0x90170110},
+ 		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1043, "ASUS", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60120},
++		{0x17, 0x90170110},
++		{0x21, 0x04211030}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1043, "ASUS", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60130},
++		{0x17, 0x90170110},
++		{0x21, 0x03211020}),
++	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1043, "ASUS", ALC295_FIXUP_ASUS_MIC_NO_PRESENCE,
++		{0x12, 0x90a60130},
++		{0x17, 0x90170110},
++		{0x21, 0x03211020}),
+ 	SND_HDA_PIN_QUIRK(0x10ec0295, 0x1028, "Dell", ALC269_FIXUP_DELL4_MIC_NO_PRESENCE,
+ 		{0x14, 0x90170110},
+ 		{0x21, 0x04211020}),
+diff --git a/sound/pci/ice1712/ice1712.c b/sound/pci/ice1712/ice1712.c
+index 884d0cdec08c..73e1e5400506 100644
+--- a/sound/pci/ice1712/ice1712.c
++++ b/sound/pci/ice1712/ice1712.c
+@@ -2332,7 +2332,8 @@ static int snd_ice1712_chip_init(struct snd_ice1712 *ice)
+ 	pci_write_config_byte(ice->pci, 0x61, ice->eeprom.data[ICE_EEP1_ACLINK]);
+ 	pci_write_config_byte(ice->pci, 0x62, ice->eeprom.data[ICE_EEP1_I2SID]);
+ 	pci_write_config_byte(ice->pci, 0x63, ice->eeprom.data[ICE_EEP1_SPDIF]);
+-	if (ice->eeprom.subvendor != ICE1712_SUBDEVICE_STDSP24) {
++	if (ice->eeprom.subvendor != ICE1712_SUBDEVICE_STDSP24 &&
++	    ice->eeprom.subvendor != ICE1712_SUBDEVICE_STAUDIO_ADCIII) {
+ 		ice->gpio.write_mask = ice->eeprom.gpiomask;
+ 		ice->gpio.direction = ice->eeprom.gpiodir;
+ 		snd_ice1712_write(ice, ICE1712_IREG_GPIO_WRITE_MASK,
+diff --git a/tools/bootconfig/main.c b/tools/bootconfig/main.c
+index a9b97814d1a9..37fb2e85de12 100644
+--- a/tools/bootconfig/main.c
++++ b/tools/bootconfig/main.c
+@@ -287,6 +287,7 @@ int apply_xbc(const char *path, const char *xbc_path)
+ 	ret = delete_xbc(path);
+ 	if (ret < 0) {
+ 		pr_err("Failed to delete previous boot config: %d\n", ret);
++		free(data);
+ 		return ret;
+ 	}
+ 
+@@ -294,24 +295,27 @@ int apply_xbc(const char *path, const char *xbc_path)
+ 	fd = open(path, O_RDWR | O_APPEND);
+ 	if (fd < 0) {
+ 		pr_err("Failed to open %s: %d\n", path, fd);
++		free(data);
+ 		return fd;
+ 	}
+ 	/* TODO: Ensure the @path is initramfs/initrd image */
+ 	ret = write(fd, data, size + 8);
+ 	if (ret < 0) {
+ 		pr_err("Failed to apply a boot config: %d\n", ret);
+-		return ret;
++		goto out;
+ 	}
+ 	/* Write a magic word of the bootconfig */
+ 	ret = write(fd, BOOTCONFIG_MAGIC, BOOTCONFIG_MAGIC_LEN);
+ 	if (ret < 0) {
+ 		pr_err("Failed to apply a boot config magic: %d\n", ret);
+-		return ret;
++		goto out;
+ 	}
++	ret = 0;
++out:
+ 	close(fd);
+ 	free(data);
+ 
+-	return 0;
++	return ret;
+ }
+ 
+ int usage(void)
+diff --git a/tools/testing/selftests/bpf/prog_tests/mmap.c b/tools/testing/selftests/bpf/prog_tests/mmap.c
+index b0e789678aa4..5495b669fccc 100644
+--- a/tools/testing/selftests/bpf/prog_tests/mmap.c
++++ b/tools/testing/selftests/bpf/prog_tests/mmap.c
+@@ -19,7 +19,7 @@ void test_mmap(void)
+ 	const size_t map_sz = roundup_page(sizeof(struct map_data));
+ 	const int zero = 0, one = 1, two = 2, far = 1500;
+ 	const long page_size = sysconf(_SC_PAGE_SIZE);
+-	int err, duration = 0, i, data_map_fd;
++	int err, duration = 0, i, data_map_fd, rdmap_fd;
+ 	struct bpf_map *data_map, *bss_map;
+ 	void *bss_mmaped = NULL, *map_mmaped = NULL, *tmp1, *tmp2;
+ 	struct test_mmap__bss *bss_data;
+@@ -36,6 +36,17 @@ void test_mmap(void)
+ 	data_map = skel->maps.data_map;
+ 	data_map_fd = bpf_map__fd(data_map);
+ 
++	rdmap_fd = bpf_map__fd(skel->maps.rdonly_map);
++	tmp1 = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, rdmap_fd, 0);
++	if (CHECK(tmp1 != MAP_FAILED, "rdonly_write_mmap", "unexpected success\n")) {
++		munmap(tmp1, 4096);
++		goto cleanup;
++	}
++	/* now double-check if it's mmap()'able at all */
++	tmp1 = mmap(NULL, 4096, PROT_READ, MAP_SHARED, rdmap_fd, 0);
++	if (CHECK(tmp1 == MAP_FAILED, "rdonly_read_mmap", "failed: %d\n", errno))
++		goto cleanup;
++
+ 	bss_mmaped = mmap(NULL, bss_sz, PROT_READ | PROT_WRITE, MAP_SHARED,
+ 			  bpf_map__fd(bss_map), 0);
+ 	if (CHECK(bss_mmaped == MAP_FAILED, "bss_mmap",
+diff --git a/tools/testing/selftests/bpf/progs/test_mmap.c b/tools/testing/selftests/bpf/progs/test_mmap.c
+index 6239596cd14e..4eb42cff5fe9 100644
+--- a/tools/testing/selftests/bpf/progs/test_mmap.c
++++ b/tools/testing/selftests/bpf/progs/test_mmap.c
+@@ -7,6 +7,14 @@
+ 
+ char _license[] SEC("license") = "GPL";
+ 
++struct {
++	__uint(type, BPF_MAP_TYPE_ARRAY);
++	__uint(max_entries, 4096);
++	__uint(map_flags, BPF_F_MMAPABLE | BPF_F_RDONLY_PROG);
++	__type(key, __u32);
++	__type(value, char);
++} rdonly_map SEC(".maps");
++
+ struct {
+ 	__uint(type, BPF_MAP_TYPE_ARRAY);
+ 	__uint(max_entries, 512 * 4); /* at least 4 pages of data */
+diff --git a/tools/testing/selftests/ftrace/ftracetest b/tools/testing/selftests/ftrace/ftracetest
+index 144308a757b7..19e9236dec5e 100755
+--- a/tools/testing/selftests/ftrace/ftracetest
++++ b/tools/testing/selftests/ftrace/ftracetest
+@@ -17,6 +17,7 @@ echo "		-v|--verbose Increase verbosity of test messages"
+ echo "		-vv        Alias of -v -v (Show all results in stdout)"
+ echo "		-vvv       Alias of -v -v -v (Show all commands immediately)"
+ echo "		--fail-unsupported Treat UNSUPPORTED as a failure"
++echo "		--fail-unresolved Treat UNRESOLVED as a failure"
+ echo "		-d|--debug Debug mode (trace all shell commands)"
+ echo "		-l|--logdir <dir> Save logs on the <dir>"
+ echo "		            If <dir> is -, all logs output in console only"
+@@ -112,6 +113,10 @@ parse_opts() { # opts
+       UNSUPPORTED_RESULT=1
+       shift 1
+     ;;
++    --fail-unresolved)
++      UNRESOLVED_RESULT=1
++      shift 1
++    ;;
+     --logdir|-l)
+       LOG_DIR=$2
+       shift 2
+@@ -176,6 +181,7 @@ KEEP_LOG=0
+ DEBUG=0
+ VERBOSE=0
+ UNSUPPORTED_RESULT=0
++UNRESOLVED_RESULT=0
+ STOP_FAILURE=0
+ # Parse command-line options
+ parse_opts $*
+@@ -280,7 +286,7 @@ eval_result() { # sigval
+     $UNRESOLVED)
+       prlog "	[${color_blue}UNRESOLVED${color_reset}]"
+       UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
+-      return 1 # this is a kind of bug.. something happened.
++      return $UNRESOLVED_RESULT # depends on use case
+     ;;
+     $UNTESTED)
+       prlog "	[${color_blue}UNTESTED${color_reset}]"
+diff --git a/tools/testing/selftests/kvm/Makefile b/tools/testing/selftests/kvm/Makefile
+index d91c53b726e6..75dec268787f 100644
+--- a/tools/testing/selftests/kvm/Makefile
++++ b/tools/testing/selftests/kvm/Makefile
+@@ -5,8 +5,34 @@ all:
+ 
+ top_srcdir = ../../../..
+ KSFT_KHDR_INSTALL := 1
++
++# For cross-builds to work, UNAME_M has to map to ARCH and arch specific
++# directories and targets in this Makefile. "uname -m" doesn't map to
++# arch specific sub-directory names.
++#
++# UNAME_M variable to used to run the compiles pointing to the right arch
++# directories and build the right targets for these supported architectures.
++#
++# TEST_GEN_PROGS and LIBKVM are set using UNAME_M variable.
++# LINUX_TOOL_ARCH_INCLUDE is set using ARCH variable.
++#
++# x86_64 targets are named to include x86_64 as a suffix and directories
++# for includes are in x86_64 sub-directory. s390x and aarch64 follow the
++# same convention. "uname -m" doesn't result in the correct mapping for
++# s390x and aarch64.
++#
++# No change necessary for x86_64
+ UNAME_M := $(shell uname -m)
+ 
++# Set UNAME_M for arm64 compile/install to work
++ifeq ($(ARCH),arm64)
++	UNAME_M := aarch64
++endif
++# Set UNAME_M s390x compile/install to work
++ifeq ($(ARCH),s390)
++	UNAME_M := s390x
++endif
++
+ LIBKVM = lib/assert.c lib/elf.c lib/io.c lib/kvm_util.c lib/sparsebit.c
+ LIBKVM_x86_64 = lib/x86_64/processor.c lib/x86_64/vmx.c lib/x86_64/svm.c lib/x86_64/ucall.c
+ LIBKVM_aarch64 = lib/aarch64/processor.c lib/aarch64/ucall.c
+@@ -47,7 +73,7 @@ LIBKVM += $(LIBKVM_$(UNAME_M))
+ INSTALL_HDR_PATH = $(top_srcdir)/usr
+ LINUX_HDR_PATH = $(INSTALL_HDR_PATH)/include/
+ LINUX_TOOL_INCLUDE = $(top_srcdir)/tools/include
+-LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/x86/include
++LINUX_TOOL_ARCH_INCLUDE = $(top_srcdir)/tools/arch/$(ARCH)/include
+ CFLAGS += -Wall -Wstrict-prototypes -Wuninitialized -O2 -g -std=gnu99 \
+ 	-fno-stack-protector -fno-PIE -I$(LINUX_TOOL_INCLUDE) \
+ 	-I$(LINUX_TOOL_ARCH_INCLUDE) -I$(LINUX_HDR_PATH) -Iinclude \
+@@ -78,6 +104,7 @@ $(LIBKVM_OBJ): $(OUTPUT)/%.o: %.c
+ $(OUTPUT)/libkvm.a: $(LIBKVM_OBJ)
+ 	$(AR) crs $@ $^
+ 
++x := $(shell mkdir -p $(sort $(dir $(TEST_GEN_PROGS))))
+ all: $(STATIC_LIBS)
+ $(TEST_GEN_PROGS): $(STATIC_LIBS)
+ 
+diff --git a/tools/testing/selftests/kvm/include/evmcs.h b/tools/testing/selftests/kvm/include/evmcs.h
+index 4912d23844bc..e31ac9c5ead0 100644
+--- a/tools/testing/selftests/kvm/include/evmcs.h
++++ b/tools/testing/selftests/kvm/include/evmcs.h
+@@ -217,8 +217,8 @@ struct hv_enlightened_vmcs {
+ #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_MASK	\
+ 		(~((1ull << HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
+ 
+-struct hv_enlightened_vmcs *current_evmcs;
+-struct hv_vp_assist_page *current_vp_assist;
++extern struct hv_enlightened_vmcs *current_evmcs;
++extern struct hv_vp_assist_page *current_vp_assist;
+ 
+ int vcpu_enable_evmcs(struct kvm_vm *vm, int vcpu_id);
+ 
+diff --git a/tools/testing/selftests/kvm/lib/x86_64/vmx.c b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+index 7aaa99ca4dbc..ce528f3cf093 100644
+--- a/tools/testing/selftests/kvm/lib/x86_64/vmx.c
++++ b/tools/testing/selftests/kvm/lib/x86_64/vmx.c
+@@ -17,6 +17,9 @@
+ 
+ bool enable_evmcs;
+ 
++struct hv_enlightened_vmcs *current_evmcs;
++struct hv_vp_assist_page *current_vp_assist;
++
+ struct eptPageTableEntry {
+ 	uint64_t readable:1;
+ 	uint64_t writable:1;

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-06-03 11:44 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-06-03 11:44 UTC (permalink / raw)
  To: gentoo-commits

commit:     17359e12cf00380c018b7bb205be730390d8db4c
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun  3 11:44:41 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun  3 11:44:41 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=17359e12

Linux patch 5.6.16

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1015_linux-5.6.16.patch | 7460 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 7464 insertions(+)

diff --git a/0000_README b/0000_README
index 1c0ea04..eb1d2c7 100644
--- a/0000_README
+++ b/0000_README
@@ -103,6 +103,10 @@ Patch:  1014_linux-5.6.15.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.15
 
+Patch:  1015_linux-5.6.16.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.16
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1015_linux-5.6.16.patch b/1015_linux-5.6.16.patch
new file mode 100644
index 0000000..d0d4b81
--- /dev/null
+++ b/1015_linux-5.6.16.patch
@@ -0,0 +1,7460 @@
+diff --git a/Makefile b/Makefile
+index 3eca0c523098..1befb37dcc58 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 15
++SUBLEVEL = 16
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/compressed/vmlinux.lds.S b/arch/arm/boot/compressed/vmlinux.lds.S
+index fc7ed03d8b93..51b078604978 100644
+--- a/arch/arm/boot/compressed/vmlinux.lds.S
++++ b/arch/arm/boot/compressed/vmlinux.lds.S
+@@ -43,7 +43,7 @@ SECTIONS
+   }
+   .table : ALIGN(4) {
+     _table_start = .;
+-    LONG(ZIMAGE_MAGIC(2))
++    LONG(ZIMAGE_MAGIC(4))
+     LONG(ZIMAGE_MAGIC(0x5a534c4b))
+     LONG(ZIMAGE_MAGIC(__piggy_size_addr - _start))
+     LONG(ZIMAGE_MAGIC(_kernel_bss_size))
+diff --git a/arch/arm/boot/dts/bcm-hr2.dtsi b/arch/arm/boot/dts/bcm-hr2.dtsi
+index 6142c672811e..5e5f5ca3c86f 100644
+--- a/arch/arm/boot/dts/bcm-hr2.dtsi
++++ b/arch/arm/boot/dts/bcm-hr2.dtsi
+@@ -75,7 +75,7 @@
+ 		timer@20200 {
+ 			compatible = "arm,cortex-a9-global-timer";
+ 			reg = <0x20200 0x100>;
+-			interrupts = <GIC_PPI 11 IRQ_TYPE_LEVEL_HIGH>;
++			interrupts = <GIC_PPI 11 IRQ_TYPE_EDGE_RISING>;
+ 			clocks = <&periph_clk>;
+ 		};
+ 
+@@ -83,7 +83,7 @@
+ 			compatible = "arm,cortex-a9-twd-timer";
+ 			reg = <0x20600 0x20>;
+ 			interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(1) |
+-						  IRQ_TYPE_LEVEL_HIGH)>;
++						  IRQ_TYPE_EDGE_RISING)>;
+ 			clocks = <&periph_clk>;
+ 		};
+ 
+@@ -91,7 +91,7 @@
+ 			compatible = "arm,cortex-a9-twd-wdt";
+ 			reg = <0x20620 0x20>;
+ 			interrupts = <GIC_PPI 14 (GIC_CPU_MASK_SIMPLE(1) |
+-						  IRQ_TYPE_LEVEL_HIGH)>;
++						  IRQ_TYPE_EDGE_RISING)>;
+ 			clocks = <&periph_clk>;
+ 		};
+ 
+diff --git a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+index 4c3f606e5b8d..f65448c01e31 100644
+--- a/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
++++ b/arch/arm/boot/dts/bcm2835-rpi-zero-w.dts
+@@ -24,7 +24,7 @@
+ 
+ 	leds {
+ 		act {
+-			gpios = <&gpio 47 GPIO_ACTIVE_HIGH>;
++			gpios = <&gpio 47 GPIO_ACTIVE_LOW>;
+ 		};
+ 	};
+ 
+diff --git a/arch/arm/boot/dts/imx6q-b450v3.dts b/arch/arm/boot/dts/imx6q-b450v3.dts
+index 95b8f2d71821..fb0980190aa0 100644
+--- a/arch/arm/boot/dts/imx6q-b450v3.dts
++++ b/arch/arm/boot/dts/imx6q-b450v3.dts
+@@ -65,13 +65,6 @@
+ 	};
+ };
+ 
+-&clks {
+-	assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
+-			  <&clks IMX6QDL_CLK_LDB_DI1_SEL>;
+-	assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>,
+-				 <&clks IMX6QDL_CLK_PLL3_USB_OTG>;
+-};
+-
+ &ldb {
+ 	status = "okay";
+ 
+diff --git a/arch/arm/boot/dts/imx6q-b650v3.dts b/arch/arm/boot/dts/imx6q-b650v3.dts
+index 611cb7ae7e55..8f762d9c5ae9 100644
+--- a/arch/arm/boot/dts/imx6q-b650v3.dts
++++ b/arch/arm/boot/dts/imx6q-b650v3.dts
+@@ -65,13 +65,6 @@
+ 	};
+ };
+ 
+-&clks {
+-	assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
+-			  <&clks IMX6QDL_CLK_LDB_DI1_SEL>;
+-	assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>,
+-				 <&clks IMX6QDL_CLK_PLL3_USB_OTG>;
+-};
+-
+ &ldb {
+ 	status = "okay";
+ 
+diff --git a/arch/arm/boot/dts/imx6q-b850v3.dts b/arch/arm/boot/dts/imx6q-b850v3.dts
+index e4cb118f88c6..1ea64ecf4291 100644
+--- a/arch/arm/boot/dts/imx6q-b850v3.dts
++++ b/arch/arm/boot/dts/imx6q-b850v3.dts
+@@ -53,17 +53,6 @@
+ 	};
+ };
+ 
+-&clks {
+-	assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
+-			  <&clks IMX6QDL_CLK_LDB_DI1_SEL>,
+-			  <&clks IMX6QDL_CLK_IPU1_DI0_PRE_SEL>,
+-			  <&clks IMX6QDL_CLK_IPU2_DI0_PRE_SEL>;
+-	assigned-clock-parents = <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>,
+-				 <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>,
+-				 <&clks IMX6QDL_CLK_PLL2_PFD2_396M>,
+-				 <&clks IMX6QDL_CLK_PLL2_PFD2_396M>;
+-};
+-
+ &ldb {
+ 	fsl,dual-channel;
+ 	status = "okay";
+diff --git a/arch/arm/boot/dts/imx6q-bx50v3.dtsi b/arch/arm/boot/dts/imx6q-bx50v3.dtsi
+index fa27dcdf06f1..1938b04199c4 100644
+--- a/arch/arm/boot/dts/imx6q-bx50v3.dtsi
++++ b/arch/arm/boot/dts/imx6q-bx50v3.dtsi
+@@ -377,3 +377,18 @@
+ 		#interrupt-cells = <1>;
+ 	};
+ };
++
++&clks {
++	assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
++			  <&clks IMX6QDL_CLK_LDB_DI1_SEL>,
++			  <&clks IMX6QDL_CLK_IPU1_DI0_PRE_SEL>,
++			  <&clks IMX6QDL_CLK_IPU1_DI1_PRE_SEL>,
++			  <&clks IMX6QDL_CLK_IPU2_DI0_PRE_SEL>,
++			  <&clks IMX6QDL_CLK_IPU2_DI1_PRE_SEL>;
++	assigned-clock-parents = <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>,
++				 <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>,
++				 <&clks IMX6QDL_CLK_PLL2_PFD0_352M>,
++				 <&clks IMX6QDL_CLK_PLL2_PFD0_352M>,
++				 <&clks IMX6QDL_CLK_PLL2_PFD0_352M>,
++				 <&clks IMX6QDL_CLK_PLL2_PFD0_352M>;
++};
+diff --git a/arch/arm/boot/dts/mmp3-dell-ariel.dts b/arch/arm/boot/dts/mmp3-dell-ariel.dts
+index 15449c72c042..b0ec14c42164 100644
+--- a/arch/arm/boot/dts/mmp3-dell-ariel.dts
++++ b/arch/arm/boot/dts/mmp3-dell-ariel.dts
+@@ -98,19 +98,19 @@
+ 	status = "okay";
+ };
+ 
+-&ssp3 {
++&ssp1 {
+ 	status = "okay";
+-	cs-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
++	cs-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+ 
+ 	firmware-flash@0 {
+-		compatible = "st,m25p80", "jedec,spi-nor";
++		compatible = "winbond,w25q32", "jedec,spi-nor";
+ 		reg = <0>;
+-		spi-max-frequency = <40000000>;
++		spi-max-frequency = <104000000>;
+ 		m25p,fast-read;
+ 	};
+ };
+ 
+-&ssp4 {
+-	cs-gpios = <&gpio 56 GPIO_ACTIVE_HIGH>;
++&ssp2 {
++	cs-gpios = <&gpio 56 GPIO_ACTIVE_LOW>;
+ 	status = "okay";
+ };
+diff --git a/arch/arm/boot/dts/mmp3.dtsi b/arch/arm/boot/dts/mmp3.dtsi
+index 59a108e49b41..1e25bf998ab5 100644
+--- a/arch/arm/boot/dts/mmp3.dtsi
++++ b/arch/arm/boot/dts/mmp3.dtsi
+@@ -202,8 +202,7 @@
+ 			};
+ 
+ 			hsic_phy0: hsic-phy@f0001800 {
+-				compatible = "marvell,mmp3-hsic-phy",
+-					     "usb-nop-xceiv";
++				compatible = "marvell,mmp3-hsic-phy";
+ 				reg = <0xf0001800 0x40>;
+ 				#phy-cells = <0>;
+ 				status = "disabled";
+@@ -224,8 +223,7 @@
+ 			};
+ 
+ 			hsic_phy1: hsic-phy@f0002800 {
+-				compatible = "marvell,mmp3-hsic-phy",
+-					     "usb-nop-xceiv";
++				compatible = "marvell,mmp3-hsic-phy";
+ 				reg = <0xf0002800 0x40>;
+ 				#phy-cells = <0>;
+ 				status = "disabled";
+@@ -531,7 +529,7 @@
+ 		};
+ 
+ 		soc_clocks: clocks@d4050000 {
+-			compatible = "marvell,mmp2-clock";
++			compatible = "marvell,mmp3-clock";
+ 			reg = <0xd4050000 0x1000>,
+ 			      <0xd4282800 0x400>,
+ 			      <0xd4015000 0x1000>;
+diff --git a/arch/arm/boot/dts/motorola-mapphone-common.dtsi b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+index 9067e0ef4240..06fbffa81636 100644
+--- a/arch/arm/boot/dts/motorola-mapphone-common.dtsi
++++ b/arch/arm/boot/dts/motorola-mapphone-common.dtsi
+@@ -367,6 +367,8 @@
+ };
+ 
+ &mmc3 {
++	pinctrl-names = "default";
++	pinctrl-0 = <&mmc3_pins>;
+ 	vmmc-supply = <&wl12xx_vmmc>;
+ 	/* uart2_tx.sdmmc3_dat1 pad as wakeirq */
+ 	interrupts-extended = <&wakeupgen GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH
+@@ -472,6 +474,37 @@
+ 		>;
+ 	};
+ 
++	/*
++	 * Android uses PIN_OFF_INPUT_PULLDOWN | PIN_INPUT_PULLUP | MUX_MODE3
++	 * for gpio_100, but the internal pull makes wlan flakey on some
++	 * devices. Off mode value should be tested if we have off mode working
++	 * later on.
++	 */
++	mmc3_pins: pinmux_mmc3_pins {
++		pinctrl-single,pins = <
++		/* 0x4a10008e gpmc_wait2.gpio_100 d23 */
++		OMAP4_IOPAD(0x08e, PIN_INPUT | MUX_MODE3)
++
++		/* 0x4a100102 abe_mcbsp1_dx.sdmmc3_dat2 ab25 */
++		OMAP4_IOPAD(0x102, PIN_INPUT_PULLUP | MUX_MODE1)
++
++		/* 0x4a100104 abe_mcbsp1_fsx.sdmmc3_dat3 ac27 */
++		OMAP4_IOPAD(0x104, PIN_INPUT_PULLUP | MUX_MODE1)
++
++		/* 0x4a100118 uart2_cts.sdmmc3_clk ab26 */
++		OMAP4_IOPAD(0x118, PIN_INPUT | MUX_MODE1)
++
++		/* 0x4a10011a uart2_rts.sdmmc3_cmd ab27 */
++		OMAP4_IOPAD(0x11a, PIN_INPUT_PULLUP | MUX_MODE1)
++
++		/* 0x4a10011c uart2_rx.sdmmc3_dat0 aa25 */
++		OMAP4_IOPAD(0x11c, PIN_INPUT_PULLUP | MUX_MODE1)
++
++		/* 0x4a10011e uart2_tx.sdmmc3_dat1 aa26 */
++		OMAP4_IOPAD(0x11e, PIN_INPUT_PULLUP | MUX_MODE1)
++		>;
++	};
++
+ 	/* gpmc_ncs0.gpio_50 */
+ 	poweroff_gpio: pinmux_poweroff_pins {
+ 		pinctrl-single,pins = <
+@@ -690,14 +723,18 @@
+ };
+ 
+ /*
+- * As uart1 is wired to mdm6600 with rts and cts, we can use the cts pin for
+- * uart1 wakeirq.
++ * The uart1 port is wired to mdm6600 with rts and cts. The modem uses gpio_149
++ * for wake-up events for both the USB PHY and the UART. We can use gpio_149
++ * pad as the shared wakeirq for the UART rather than the RX or CTS pad as we
++ * have gpio_149 trigger before the UART transfer starts.
+  */
+ &uart1 {
+ 	pinctrl-names = "default";
+ 	pinctrl-0 = <&uart1_pins>;
+ 	interrupts-extended = <&wakeupgen GIC_SPI 72 IRQ_TYPE_LEVEL_HIGH
+-			       &omap4_pmx_core 0xfc>;
++			       &omap4_pmx_core 0x110>;
++	uart-has-rtscts;
++	current-speed = <115200>;
+ };
+ 
+ &uart3 {
+diff --git a/arch/arm/boot/dts/rk3036.dtsi b/arch/arm/boot/dts/rk3036.dtsi
+index cf36e25195b4..8c4b8f56c9e0 100644
+--- a/arch/arm/boot/dts/rk3036.dtsi
++++ b/arch/arm/boot/dts/rk3036.dtsi
+@@ -128,7 +128,7 @@
+ 		assigned-clocks = <&cru SCLK_GPU>;
+ 		assigned-clock-rates = <100000000>;
+ 		clocks = <&cru SCLK_GPU>, <&cru SCLK_GPU>;
+-		clock-names = "core", "bus";
++		clock-names = "bus", "core";
+ 		resets = <&cru SRST_GPU>;
+ 		status = "disabled";
+ 	};
+diff --git a/arch/arm/boot/dts/rk3228-evb.dts b/arch/arm/boot/dts/rk3228-evb.dts
+index 5670b33fd1bd..aed879db6c15 100644
+--- a/arch/arm/boot/dts/rk3228-evb.dts
++++ b/arch/arm/boot/dts/rk3228-evb.dts
+@@ -46,7 +46,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		phy: phy@0 {
++		phy: ethernet-phy@0 {
+ 			compatible = "ethernet-phy-id1234.d400", "ethernet-phy-ieee802.3-c22";
+ 			reg = <0>;
+ 			clocks = <&cru SCLK_MAC_PHY>;
+diff --git a/arch/arm/boot/dts/rk3229-xms6.dts b/arch/arm/boot/dts/rk3229-xms6.dts
+index 679fc2b00e5a..933ef69da32a 100644
+--- a/arch/arm/boot/dts/rk3229-xms6.dts
++++ b/arch/arm/boot/dts/rk3229-xms6.dts
+@@ -150,7 +150,7 @@
+ 		#address-cells = <1>;
+ 		#size-cells = <0>;
+ 
+-		phy: phy@0 {
++		phy: ethernet-phy@0 {
+ 			compatible = "ethernet-phy-id1234.d400",
+ 			             "ethernet-phy-ieee802.3-c22";
+ 			reg = <0>;
+diff --git a/arch/arm/boot/dts/rk322x.dtsi b/arch/arm/boot/dts/rk322x.dtsi
+index 4e90efdc9630..a83f65486ad4 100644
+--- a/arch/arm/boot/dts/rk322x.dtsi
++++ b/arch/arm/boot/dts/rk322x.dtsi
+@@ -561,7 +561,7 @@
+ 				  "pp1",
+ 				  "ppmmu1";
+ 		clocks = <&cru ACLK_GPU>, <&cru ACLK_GPU>;
+-		clock-names = "core", "bus";
++		clock-names = "bus", "core";
+ 		resets = <&cru SRST_GPU_A>;
+ 		status = "disabled";
+ 	};
+@@ -1033,7 +1033,7 @@
+ 			};
+ 		};
+ 
+-		spi-0 {
++		spi0 {
+ 			spi0_clk: spi0-clk {
+ 				rockchip,pins = <0 RK_PB1 2 &pcfg_pull_up>;
+ 			};
+@@ -1051,7 +1051,7 @@
+ 			};
+ 		};
+ 
+-		spi-1 {
++		spi1 {
+ 			spi1_clk: spi1-clk {
+ 				rockchip,pins = <0 RK_PC7 2 &pcfg_pull_up>;
+ 			};
+diff --git a/arch/arm/boot/dts/rk3xxx.dtsi b/arch/arm/boot/dts/rk3xxx.dtsi
+index 241f43e29c77..bb5ff10b9110 100644
+--- a/arch/arm/boot/dts/rk3xxx.dtsi
++++ b/arch/arm/boot/dts/rk3xxx.dtsi
+@@ -84,7 +84,7 @@
+ 		compatible = "arm,mali-400";
+ 		reg = <0x10090000 0x10000>;
+ 		clocks = <&cru ACLK_GPU>, <&cru ACLK_GPU>;
+-		clock-names = "core", "bus";
++		clock-names = "bus", "core";
+ 		assigned-clocks = <&cru ACLK_GPU>;
+ 		assigned-clock-rates = <100000000>;
+ 		resets = <&cru SRST_GPU>;
+diff --git a/arch/arm/include/asm/assembler.h b/arch/arm/include/asm/assembler.h
+index 99929122dad7..3546d294d55f 100644
+--- a/arch/arm/include/asm/assembler.h
++++ b/arch/arm/include/asm/assembler.h
+@@ -18,11 +18,11 @@
+ #endif
+ 
+ #include <asm/ptrace.h>
+-#include <asm/domain.h>
+ #include <asm/opcodes-virt.h>
+ #include <asm/asm-offsets.h>
+ #include <asm/page.h>
+ #include <asm/thread_info.h>
++#include <asm/uaccess-asm.h>
+ 
+ #define IOMEM(x)	(x)
+ 
+@@ -446,79 +446,6 @@ THUMB(	orr	\reg , \reg , #PSR_T_BIT	)
+ 	.size \name , . - \name
+ 	.endm
+ 
+-	.macro	csdb
+-#ifdef CONFIG_THUMB2_KERNEL
+-	.inst.w	0xf3af8014
+-#else
+-	.inst	0xe320f014
+-#endif
+-	.endm
+-
+-	.macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req
+-#ifndef CONFIG_CPU_USE_DOMAINS
+-	adds	\tmp, \addr, #\size - 1
+-	sbcscc	\tmp, \tmp, \limit
+-	bcs	\bad
+-#ifdef CONFIG_CPU_SPECTRE
+-	movcs	\addr, #0
+-	csdb
+-#endif
+-#endif
+-	.endm
+-
+-	.macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req
+-#ifdef CONFIG_CPU_SPECTRE
+-	sub	\tmp, \limit, #1
+-	subs	\tmp, \tmp, \addr	@ tmp = limit - 1 - addr
+-	addhs	\tmp, \tmp, #1		@ if (tmp >= 0) {
+-	subshs	\tmp, \tmp, \size	@ tmp = limit - (addr + size) }
+-	movlo	\addr, #0		@ if (tmp < 0) addr = NULL
+-	csdb
+-#endif
+-	.endm
+-
+-	.macro	uaccess_disable, tmp, isb=1
+-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+-	/*
+-	 * Whenever we re-enter userspace, the domains should always be
+-	 * set appropriately.
+-	 */
+-	mov	\tmp, #DACR_UACCESS_DISABLE
+-	mcr	p15, 0, \tmp, c3, c0, 0		@ Set domain register
+-	.if	\isb
+-	instr_sync
+-	.endif
+-#endif
+-	.endm
+-
+-	.macro	uaccess_enable, tmp, isb=1
+-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+-	/*
+-	 * Whenever we re-enter userspace, the domains should always be
+-	 * set appropriately.
+-	 */
+-	mov	\tmp, #DACR_UACCESS_ENABLE
+-	mcr	p15, 0, \tmp, c3, c0, 0
+-	.if	\isb
+-	instr_sync
+-	.endif
+-#endif
+-	.endm
+-
+-	.macro	uaccess_save, tmp
+-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+-	mrc	p15, 0, \tmp, c3, c0, 0
+-	str	\tmp, [sp, #SVC_DACR]
+-#endif
+-	.endm
+-
+-	.macro	uaccess_restore
+-#ifdef CONFIG_CPU_SW_DOMAIN_PAN
+-	ldr	r0, [sp, #SVC_DACR]
+-	mcr	p15, 0, r0, c3, c0, 0
+-#endif
+-	.endm
+-
+ 	.irp	c,,eq,ne,cs,cc,mi,pl,vs,vc,hi,ls,ge,lt,gt,le,hs,lo
+ 	.macro	ret\c, reg
+ #if __LINUX_ARM_ARCH__ < 6
+diff --git a/arch/arm/include/asm/uaccess-asm.h b/arch/arm/include/asm/uaccess-asm.h
+new file mode 100644
+index 000000000000..907571fd05c6
+--- /dev/null
++++ b/arch/arm/include/asm/uaccess-asm.h
+@@ -0,0 +1,117 @@
++/* SPDX-License-Identifier: GPL-2.0-only */
++
++#ifndef __ASM_UACCESS_ASM_H__
++#define __ASM_UACCESS_ASM_H__
++
++#include <asm/asm-offsets.h>
++#include <asm/domain.h>
++#include <asm/memory.h>
++#include <asm/thread_info.h>
++
++	.macro	csdb
++#ifdef CONFIG_THUMB2_KERNEL
++	.inst.w	0xf3af8014
++#else
++	.inst	0xe320f014
++#endif
++	.endm
++
++	.macro check_uaccess, addr:req, size:req, limit:req, tmp:req, bad:req
++#ifndef CONFIG_CPU_USE_DOMAINS
++	adds	\tmp, \addr, #\size - 1
++	sbcscc	\tmp, \tmp, \limit
++	bcs	\bad
++#ifdef CONFIG_CPU_SPECTRE
++	movcs	\addr, #0
++	csdb
++#endif
++#endif
++	.endm
++
++	.macro uaccess_mask_range_ptr, addr:req, size:req, limit:req, tmp:req
++#ifdef CONFIG_CPU_SPECTRE
++	sub	\tmp, \limit, #1
++	subs	\tmp, \tmp, \addr	@ tmp = limit - 1 - addr
++	addhs	\tmp, \tmp, #1		@ if (tmp >= 0) {
++	subshs	\tmp, \tmp, \size	@ tmp = limit - (addr + size) }
++	movlo	\addr, #0		@ if (tmp < 0) addr = NULL
++	csdb
++#endif
++	.endm
++
++	.macro	uaccess_disable, tmp, isb=1
++#ifdef CONFIG_CPU_SW_DOMAIN_PAN
++	/*
++	 * Whenever we re-enter userspace, the domains should always be
++	 * set appropriately.
++	 */
++	mov	\tmp, #DACR_UACCESS_DISABLE
++	mcr	p15, 0, \tmp, c3, c0, 0		@ Set domain register
++	.if	\isb
++	instr_sync
++	.endif
++#endif
++	.endm
++
++	.macro	uaccess_enable, tmp, isb=1
++#ifdef CONFIG_CPU_SW_DOMAIN_PAN
++	/*
++	 * Whenever we re-enter userspace, the domains should always be
++	 * set appropriately.
++	 */
++	mov	\tmp, #DACR_UACCESS_ENABLE
++	mcr	p15, 0, \tmp, c3, c0, 0
++	.if	\isb
++	instr_sync
++	.endif
++#endif
++	.endm
++
++#if defined(CONFIG_CPU_SW_DOMAIN_PAN) || defined(CONFIG_CPU_USE_DOMAINS)
++#define DACR(x...)	x
++#else
++#define DACR(x...)
++#endif
++
++	/*
++	 * Save the address limit on entry to a privileged exception.
++	 *
++	 * If we are using the DACR for kernel access by the user accessors
++	 * (CONFIG_CPU_USE_DOMAINS=y), always reset the DACR kernel domain
++	 * back to client mode, whether or not \disable is set.
++	 *
++	 * If we are using SW PAN, set the DACR user domain to no access
++	 * if \disable is set.
++	 */
++	.macro	uaccess_entry, tsk, tmp0, tmp1, tmp2, disable
++	ldr	\tmp1, [\tsk, #TI_ADDR_LIMIT]
++	mov	\tmp2, #TASK_SIZE
++	str	\tmp2, [\tsk, #TI_ADDR_LIMIT]
++ DACR(	mrc	p15, 0, \tmp0, c3, c0, 0)
++ DACR(	str	\tmp0, [sp, #SVC_DACR])
++	str	\tmp1, [sp, #SVC_ADDR_LIMIT]
++	.if \disable && IS_ENABLED(CONFIG_CPU_SW_DOMAIN_PAN)
++	/* kernel=client, user=no access */
++	mov	\tmp2, #DACR_UACCESS_DISABLE
++	mcr	p15, 0, \tmp2, c3, c0, 0
++	instr_sync
++	.elseif IS_ENABLED(CONFIG_CPU_USE_DOMAINS)
++	/* kernel=client */
++	bic	\tmp2, \tmp0, #domain_mask(DOMAIN_KERNEL)
++	orr	\tmp2, \tmp2, #domain_val(DOMAIN_KERNEL, DOMAIN_CLIENT)
++	mcr	p15, 0, \tmp2, c3, c0, 0
++	instr_sync
++	.endif
++	.endm
++
++	/* Restore the user access state previously saved by uaccess_entry */
++	.macro	uaccess_exit, tsk, tmp0, tmp1
++	ldr	\tmp1, [sp, #SVC_ADDR_LIMIT]
++ DACR(	ldr	\tmp0, [sp, #SVC_DACR])
++	str	\tmp1, [\tsk, #TI_ADDR_LIMIT]
++ DACR(	mcr	p15, 0, \tmp0, c3, c0, 0)
++	.endm
++
++#undef DACR
++
++#endif /* __ASM_UACCESS_ASM_H__ */
+diff --git a/arch/arm/kernel/entry-armv.S b/arch/arm/kernel/entry-armv.S
+index 77f54830554c..55a47df04773 100644
+--- a/arch/arm/kernel/entry-armv.S
++++ b/arch/arm/kernel/entry-armv.S
+@@ -27,6 +27,7 @@
+ #include <asm/unistd.h>
+ #include <asm/tls.h>
+ #include <asm/system_info.h>
++#include <asm/uaccess-asm.h>
+ 
+ #include "entry-header.S"
+ #include <asm/entry-macro-multi.S>
+@@ -179,15 +180,7 @@ ENDPROC(__und_invalid)
+ 	stmia	r7, {r2 - r6}
+ 
+ 	get_thread_info tsk
+-	ldr	r0, [tsk, #TI_ADDR_LIMIT]
+-	mov	r1, #TASK_SIZE
+-	str	r1, [tsk, #TI_ADDR_LIMIT]
+-	str	r0, [sp, #SVC_ADDR_LIMIT]
+-
+-	uaccess_save r0
+-	.if \uaccess
+-	uaccess_disable r0
+-	.endif
++	uaccess_entry tsk, r0, r1, r2, \uaccess
+ 
+ 	.if \trace
+ #ifdef CONFIG_TRACE_IRQFLAGS
+diff --git a/arch/arm/kernel/entry-header.S b/arch/arm/kernel/entry-header.S
+index 32051ec5b33f..40db0f9188b6 100644
+--- a/arch/arm/kernel/entry-header.S
++++ b/arch/arm/kernel/entry-header.S
+@@ -6,6 +6,7 @@
+ #include <asm/asm-offsets.h>
+ #include <asm/errno.h>
+ #include <asm/thread_info.h>
++#include <asm/uaccess-asm.h>
+ #include <asm/v7m.h>
+ 
+ @ Bad Abort numbers
+@@ -217,9 +218,7 @@
+ 	blne	trace_hardirqs_off
+ #endif
+ 	.endif
+-	ldr	r1, [sp, #SVC_ADDR_LIMIT]
+-	uaccess_restore
+-	str	r1, [tsk, #TI_ADDR_LIMIT]
++	uaccess_exit tsk, r0, r1
+ 
+ #ifndef CONFIG_THUMB2_KERNEL
+ 	@ ARM mode SVC restore
+@@ -263,9 +262,7 @@
+ 	@ on the stack remains correct).
+ 	@
+ 	.macro  svc_exit_via_fiq
+-	ldr	r1, [sp, #SVC_ADDR_LIMIT]
+-	uaccess_restore
+-	str	r1, [tsk, #TI_ADDR_LIMIT]
++	uaccess_exit tsk, r0, r1
+ #ifndef CONFIG_THUMB2_KERNEL
+ 	@ ARM mode restore
+ 	mov	r0, sp
+diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+index 8b4e806d5119..125c78321ab4 100644
+--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
++++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+@@ -1401,8 +1401,8 @@
+ 				      "venc_lt_sel";
+ 			assigned-clocks = <&topckgen CLK_TOP_VENC_SEL>,
+ 					  <&topckgen CLK_TOP_VENC_LT_SEL>;
+-			assigned-clock-parents = <&topckgen CLK_TOP_VENCPLL_D2>,
+-						 <&topckgen CLK_TOP_UNIVPLL1_D2>;
++			assigned-clock-parents = <&topckgen CLK_TOP_VCODECPLL>,
++						 <&topckgen CLK_TOP_VCODECPLL_370P5>;
+ 		};
+ 
+ 		jpegdec: jpegdec@18004000 {
+diff --git a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+index a85b85d85a5f..3c7c9b52623c 100644
+--- a/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
++++ b/arch/arm64/boot/dts/qcom/apq8096-db820c.dtsi
+@@ -908,10 +908,27 @@
+ 	status = "okay";
+ };
+ 
++&q6asmdai {
++	dai@0 {
++		reg = <0>;
++	};
++
++	dai@1 {
++		reg = <1>;
++	};
++
++	dai@2 {
++		reg = <2>;
++	};
++};
++
+ &sound {
+ 	compatible = "qcom,apq8096-sndcard";
+ 	model = "DB820c";
+-	audio-routing =	"RX_BIAS", "MCLK";
++	audio-routing =	"RX_BIAS", "MCLK",
++		"MM_DL1",  "MultiMedia1 Playback",
++		"MM_DL2",  "MultiMedia2 Playback",
++		"MultiMedia3 Capture", "MM_UL3";
+ 
+ 	mm1-dai-link {
+ 		link-name = "MultiMedia1";
+diff --git a/arch/arm64/boot/dts/qcom/msm8996.dtsi b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+index 7ae082ea14ea..f925a6c7d293 100644
+--- a/arch/arm64/boot/dts/qcom/msm8996.dtsi
++++ b/arch/arm64/boot/dts/qcom/msm8996.dtsi
+@@ -2053,6 +2053,8 @@
+ 						reg = <APR_SVC_ASM>;
+ 						q6asmdai: dais {
+ 							compatible = "qcom,q6asm-dais";
++							#address-cells = <1>;
++							#size-cells = <0>;
+ 							#sound-dai-cells = <1>;
+ 							iommus = <&lpass_q6_smmu 1>;
+ 						};
+diff --git a/arch/arm64/boot/dts/rockchip/rk3328-evb.dts b/arch/arm64/boot/dts/rockchip/rk3328-evb.dts
+index 6abc6f4a86cf..05265b38cc02 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3328-evb.dts
++++ b/arch/arm64/boot/dts/rockchip/rk3328-evb.dts
+@@ -86,7 +86,7 @@
+ 	assigned-clock-rate = <50000000>;
+ 	assigned-clocks = <&cru SCLK_MAC2PHY>;
+ 	assigned-clock-parents = <&cru SCLK_MAC2PHY_SRC>;
+-
++	status = "okay";
+ };
+ 
+ &i2c1 {
+diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+index 5c4238a80144..c341172ec208 100644
+--- a/arch/arm64/boot/dts/rockchip/rk3399.dtsi
++++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
+@@ -1890,10 +1890,10 @@
+ 	gpu: gpu@ff9a0000 {
+ 		compatible = "rockchip,rk3399-mali", "arm,mali-t860";
+ 		reg = <0x0 0xff9a0000 0x0 0x10000>;
+-		interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH 0>,
+-			     <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH 0>,
+-			     <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH 0>;
+-		interrupt-names = "gpu", "job", "mmu";
++		interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH 0>,
++			     <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH 0>,
++			     <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH 0>;
++		interrupt-names = "job", "mmu", "gpu";
+ 		clocks = <&cru ACLK_GPU>;
+ 		#cooling-cells = <2>;
+ 		power-domains = <&power RK3399_PD_GPU>;
+diff --git a/arch/csky/abiv1/inc/abi/entry.h b/arch/csky/abiv1/inc/abi/entry.h
+index 5056ebb902d1..61d94ec7dd16 100644
+--- a/arch/csky/abiv1/inc/abi/entry.h
++++ b/arch/csky/abiv1/inc/abi/entry.h
+@@ -167,8 +167,8 @@
+ 	 *   BA     Reserved  C   D   V
+ 	 */
+ 	cprcr	r6, cpcr30
+-	lsri	r6, 28
+-	lsli	r6, 28
++	lsri	r6, 29
++	lsli	r6, 29
+ 	addi	r6, 0xe
+ 	cpwcr	r6, cpcr30
+ 
+diff --git a/arch/csky/abiv2/inc/abi/entry.h b/arch/csky/abiv2/inc/abi/entry.h
+index 111973c6c713..9023828ede97 100644
+--- a/arch/csky/abiv2/inc/abi/entry.h
++++ b/arch/csky/abiv2/inc/abi/entry.h
+@@ -225,8 +225,8 @@
+ 	 */
+ 	mfcr	r6, cr<30, 15> /* Get MSA0 */
+ 2:
+-	lsri	r6, 28
+-	lsli	r6, 28
++	lsri	r6, 29
++	lsli	r6, 29
+ 	addi	r6, 0x1ce
+ 	mtcr	r6, cr<30, 15> /* Set MSA0 */
+ 
+diff --git a/arch/csky/include/asm/uaccess.h b/arch/csky/include/asm/uaccess.h
+index eaa1c3403a42..60f8a4112588 100644
+--- a/arch/csky/include/asm/uaccess.h
++++ b/arch/csky/include/asm/uaccess.h
+@@ -254,7 +254,7 @@ do {								\
+ 
+ extern int __get_user_bad(void);
+ 
+-#define __copy_user(to, from, n)			\
++#define ___copy_to_user(to, from, n)			\
+ do {							\
+ 	int w0, w1, w2, w3;				\
+ 	asm volatile(					\
+@@ -289,31 +289,34 @@ do {							\
+ 	"       subi    %0, 4           \n"		\
+ 	"       br      3b              \n"		\
+ 	"5:     cmpnei  %0, 0           \n"  /* 1B */   \
+-	"       bf      8f              \n"		\
++	"       bf      13f             \n"		\
+ 	"       ldb     %3, (%2, 0)     \n"		\
+ 	"6:     stb     %3, (%1, 0)     \n"		\
+ 	"       addi    %2,  1          \n"		\
+ 	"       addi    %1,  1          \n"		\
+ 	"       subi    %0,  1          \n"		\
+ 	"       br      5b              \n"		\
+-	"7:     br      8f              \n"		\
++	"7:     subi	%0,  4          \n"		\
++	"8:     subi	%0,  4          \n"		\
++	"12:    subi	%0,  4          \n"		\
++	"       br      13f             \n"		\
+ 	".section __ex_table, \"a\"     \n"		\
+ 	".align   2                     \n"		\
+-	".long    2b, 7b                \n"		\
+-	".long    9b, 7b                \n"		\
+-	".long   10b, 7b                \n"		\
++	".long    2b, 13f               \n"		\
++	".long    4b, 13f               \n"		\
++	".long    6b, 13f               \n"		\
++	".long    9b, 12b               \n"		\
++	".long   10b, 8b                \n"		\
+ 	".long   11b, 7b                \n"		\
+-	".long    4b, 7b                \n"		\
+-	".long    6b, 7b                \n"		\
+ 	".previous                      \n"		\
+-	"8:                             \n"		\
++	"13:                            \n"		\
+ 	: "=r"(n), "=r"(to), "=r"(from), "=r"(w0),	\
+ 	  "=r"(w1), "=r"(w2), "=r"(w3)			\
+ 	: "0"(n), "1"(to), "2"(from)			\
+ 	: "memory");					\
+ } while (0)
+ 
+-#define __copy_user_zeroing(to, from, n)		\
++#define ___copy_from_user(to, from, n)			\
+ do {							\
+ 	int tmp;					\
+ 	int nsave;					\
+@@ -356,22 +359,22 @@ do {							\
+ 	"       addi    %1,  1          \n"		\
+ 	"       subi    %0,  1          \n"		\
+ 	"       br      5b              \n"		\
+-	"8:     mov     %3, %0          \n"		\
+-	"       movi    %4, 0           \n"		\
+-	"9:     stb     %4, (%1, 0)     \n"		\
+-	"       addi    %1, 1           \n"		\
+-	"       subi    %3, 1           \n"		\
+-	"       cmpnei  %3, 0           \n"		\
+-	"       bt      9b              \n"		\
+-	"       br      7f              \n"		\
++	"8:     stw     %3, (%1, 0)     \n"		\
++	"       subi    %0, 4           \n"		\
++	"       bf      7f              \n"		\
++	"9:     subi    %0, 8           \n"		\
++	"       bf      7f              \n"		\
++	"13:    stw     %3, (%1, 8)     \n"		\
++	"       subi    %0, 12          \n"		\
++	"       bf      7f              \n"		\
+ 	".section __ex_table, \"a\"     \n"		\
+ 	".align   2                     \n"		\
+-	".long    2b, 8b                \n"		\
++	".long    2b, 7f                \n"		\
++	".long    4b, 7f                \n"		\
++	".long    6b, 7f                \n"		\
+ 	".long   10b, 8b                \n"		\
+-	".long   11b, 8b                \n"		\
+-	".long   12b, 8b                \n"		\
+-	".long    4b, 8b                \n"		\
+-	".long    6b, 8b                \n"		\
++	".long   11b, 9b                \n"		\
++	".long   12b,13b                \n"		\
+ 	".previous                      \n"		\
+ 	"7:                             \n"		\
+ 	: "=r"(n), "=r"(to), "=r"(from), "=r"(nsave),	\
+diff --git a/arch/csky/kernel/entry.S b/arch/csky/kernel/entry.S
+index 007706328000..9718388448a4 100644
+--- a/arch/csky/kernel/entry.S
++++ b/arch/csky/kernel/entry.S
+@@ -318,8 +318,6 @@ ENTRY(__switch_to)
+ 
+ 	mfcr	a2, psr			/* Save PSR value */
+ 	stw	a2, (a3, THREAD_SR)	/* Save PSR in task struct */
+-	bclri	a2, 6			/* Disable interrupts */
+-	mtcr	a2, psr
+ 
+ 	SAVE_SWITCH_STACK
+ 
+diff --git a/arch/csky/kernel/perf_callchain.c b/arch/csky/kernel/perf_callchain.c
+index e68ff375c8f8..ab55e98ee8f6 100644
+--- a/arch/csky/kernel/perf_callchain.c
++++ b/arch/csky/kernel/perf_callchain.c
+@@ -12,12 +12,17 @@ struct stackframe {
+ 
+ static int unwind_frame_kernel(struct stackframe *frame)
+ {
+-	if (kstack_end((void *)frame->fp))
++	unsigned long low = (unsigned long)task_stack_page(current);
++	unsigned long high = low + THREAD_SIZE;
++
++	if (unlikely(frame->fp < low || frame->fp > high))
+ 		return -EPERM;
+-	if (frame->fp & 0x3 || frame->fp < TASK_SIZE)
++
++	if (kstack_end((void *)frame->fp) || frame->fp & 0x3)
+ 		return -EPERM;
+ 
+ 	*frame = *(struct stackframe *)frame->fp;
++
+ 	if (__kernel_text_address(frame->lr)) {
+ 		int graph = 0;
+ 
+diff --git a/arch/csky/lib/usercopy.c b/arch/csky/lib/usercopy.c
+index 647a23986fb5..3c9bd645e643 100644
+--- a/arch/csky/lib/usercopy.c
++++ b/arch/csky/lib/usercopy.c
+@@ -7,10 +7,7 @@
+ unsigned long raw_copy_from_user(void *to, const void *from,
+ 			unsigned long n)
+ {
+-	if (access_ok(from, n))
+-		__copy_user_zeroing(to, from, n);
+-	else
+-		memset(to, 0, n);
++	___copy_from_user(to, from, n);
+ 	return n;
+ }
+ EXPORT_SYMBOL(raw_copy_from_user);
+@@ -18,8 +15,7 @@ EXPORT_SYMBOL(raw_copy_from_user);
+ unsigned long raw_copy_to_user(void *to, const void *from,
+ 			unsigned long n)
+ {
+-	if (access_ok(to, n))
+-		__copy_user(to, from, n);
++	___copy_to_user(to, from, n);
+ 	return n;
+ }
+ EXPORT_SYMBOL(raw_copy_to_user);
+diff --git a/arch/parisc/mm/init.c b/arch/parisc/mm/init.c
+index 5224fb38d766..01d7071b23f7 100644
+--- a/arch/parisc/mm/init.c
++++ b/arch/parisc/mm/init.c
+@@ -562,7 +562,7 @@ void __init mem_init(void)
+ 			> BITS_PER_LONG);
+ 
+ 	high_memory = __va((max_pfn << PAGE_SHIFT));
+-	set_max_mapnr(page_to_pfn(virt_to_page(high_memory - 1)) + 1);
++	set_max_mapnr(max_low_pfn);
+ 	memblock_free_all();
+ 
+ #ifdef CONFIG_PA11
+diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
+index b0fb42b0bf4b..35608b9feb14 100644
+--- a/arch/powerpc/Kconfig
++++ b/arch/powerpc/Kconfig
+@@ -125,6 +125,7 @@ config PPC
+ 	select ARCH_HAS_MMIOWB			if PPC64
+ 	select ARCH_HAS_PHYS_TO_DMA
+ 	select ARCH_HAS_PMEM_API
++	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
+ 	select ARCH_HAS_PTE_DEVMAP		if PPC_BOOK3S_64
+ 	select ARCH_HAS_PTE_SPECIAL
+ 	select ARCH_HAS_MEMBARRIER_CALLBACKS
+diff --git a/arch/riscv/Kconfig.socs b/arch/riscv/Kconfig.socs
+index a131174a0a77..f310ad8ffcf7 100644
+--- a/arch/riscv/Kconfig.socs
++++ b/arch/riscv/Kconfig.socs
+@@ -11,13 +11,14 @@ config SOC_SIFIVE
+ 	  This enables support for SiFive SoC platform hardware.
+ 
+ config SOC_VIRT
+-       bool "QEMU Virt Machine"
+-       select POWER_RESET_SYSCON
+-       select POWER_RESET_SYSCON_POWEROFF
+-       select GOLDFISH
+-       select RTC_DRV_GOLDFISH
+-       select SIFIVE_PLIC
+-       help
+-         This enables support for QEMU Virt Machine.
++	bool "QEMU Virt Machine"
++	select POWER_RESET
++	select POWER_RESET_SYSCON
++	select POWER_RESET_SYSCON_POWEROFF
++	select GOLDFISH
++	select RTC_DRV_GOLDFISH if RTC_CLASS
++	select SIFIVE_PLIC
++	help
++	  This enables support for QEMU Virt Machine.
+ 
+ endmenu
+diff --git a/arch/riscv/include/asm/mmio.h b/arch/riscv/include/asm/mmio.h
+index a2c809df2733..56053c9838b2 100644
+--- a/arch/riscv/include/asm/mmio.h
++++ b/arch/riscv/include/asm/mmio.h
+@@ -16,6 +16,8 @@
+ 
+ #ifndef CONFIG_MMU
+ #define pgprot_noncached(x)	(x)
++#define pgprot_writecombine(x)	(x)
++#define pgprot_device(x)	(x)
+ #endif /* CONFIG_MMU */
+ 
+ /* Generic IO read/write.  These perform native-endian accesses. */
+diff --git a/arch/riscv/include/asm/pgtable.h b/arch/riscv/include/asm/pgtable.h
+index 393f2014dfee..31d912944d8d 100644
+--- a/arch/riscv/include/asm/pgtable.h
++++ b/arch/riscv/include/asm/pgtable.h
+@@ -460,12 +460,15 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
+ 
+ #else /* CONFIG_MMU */
+ 
++#define PAGE_SHARED		__pgprot(0)
+ #define PAGE_KERNEL		__pgprot(0)
+ #define swapper_pg_dir		NULL
+ #define VMALLOC_START		0
+ 
+ #define TASK_SIZE 0xffffffffUL
+ 
++static inline void __kernel_map_pages(struct page *page, int numpages, int enable) {}
++
+ #endif /* !CONFIG_MMU */
+ 
+ #define kern_addr_valid(addr)   (1) /* FIXME */
+diff --git a/arch/riscv/kernel/stacktrace.c b/arch/riscv/kernel/stacktrace.c
+index 0940681d2f68..19e46f4160cc 100644
+--- a/arch/riscv/kernel/stacktrace.c
++++ b/arch/riscv/kernel/stacktrace.c
+@@ -63,7 +63,7 @@ void notrace walk_stackframe(struct task_struct *task, struct pt_regs *regs,
+ 
+ #else /* !CONFIG_FRAME_POINTER */
+ 
+-static void notrace walk_stackframe(struct task_struct *task,
++void notrace walk_stackframe(struct task_struct *task,
+ 	struct pt_regs *regs, bool (*fn)(unsigned long, void *), void *arg)
+ {
+ 	unsigned long sp, pc;
+diff --git a/arch/x86/include/asm/dma.h b/arch/x86/include/asm/dma.h
+index 00f7cf45e699..8e95aa4b0d17 100644
+--- a/arch/x86/include/asm/dma.h
++++ b/arch/x86/include/asm/dma.h
+@@ -74,7 +74,7 @@
+ #define MAX_DMA_PFN   ((16UL * 1024 * 1024) >> PAGE_SHIFT)
+ 
+ /* 4GB broken PCI/AGP hardware bus master zone */
+-#define MAX_DMA32_PFN ((4UL * 1024 * 1024 * 1024) >> PAGE_SHIFT)
++#define MAX_DMA32_PFN (1UL << (32 - PAGE_SHIFT))
+ 
+ #ifdef CONFIG_X86_32
+ /* The maximum address that we can perform a DMA transfer to on this platform */
+diff --git a/arch/x86/include/asm/io_bitmap.h b/arch/x86/include/asm/io_bitmap.h
+index 07344d82e88e..ac1a99ffbd8d 100644
+--- a/arch/x86/include/asm/io_bitmap.h
++++ b/arch/x86/include/asm/io_bitmap.h
+@@ -17,7 +17,7 @@ struct task_struct;
+ 
+ #ifdef CONFIG_X86_IOPL_IOPERM
+ void io_bitmap_share(struct task_struct *tsk);
+-void io_bitmap_exit(void);
++void io_bitmap_exit(struct task_struct *tsk);
+ 
+ void native_tss_update_io_bitmap(void);
+ 
+@@ -29,7 +29,7 @@ void native_tss_update_io_bitmap(void);
+ 
+ #else
+ static inline void io_bitmap_share(struct task_struct *tsk) { }
+-static inline void io_bitmap_exit(void) { }
++static inline void io_bitmap_exit(struct task_struct *tsk) { }
+ static inline void tss_update_io_bitmap(void) { }
+ #endif
+ 
+diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
+index a1806598aaa4..cf2f2a85f087 100644
+--- a/arch/x86/kernel/fpu/xstate.c
++++ b/arch/x86/kernel/fpu/xstate.c
+@@ -954,18 +954,31 @@ static inline bool xfeatures_mxcsr_quirk(u64 xfeatures)
+ 	return true;
+ }
+ 
+-/*
+- * This is similar to user_regset_copyout(), but will not add offset to
+- * the source data pointer or increment pos, count, kbuf, and ubuf.
+- */
+-static inline void
+-__copy_xstate_to_kernel(void *kbuf, const void *data,
+-			unsigned int offset, unsigned int size, unsigned int size_total)
++static void fill_gap(unsigned to, void **kbuf, unsigned *pos, unsigned *count)
+ {
+-	if (offset < size_total) {
+-		unsigned int copy = min(size, size_total - offset);
++	if (*pos < to) {
++		unsigned size = to - *pos;
++
++		if (size > *count)
++			size = *count;
++		memcpy(*kbuf, (void *)&init_fpstate.xsave + *pos, size);
++		*kbuf += size;
++		*pos += size;
++		*count -= size;
++	}
++}
+ 
+-		memcpy(kbuf + offset, data, copy);
++static void copy_part(unsigned offset, unsigned size, void *from,
++			void **kbuf, unsigned *pos, unsigned *count)
++{
++	fill_gap(offset, kbuf, pos, count);
++	if (size > *count)
++		size = *count;
++	if (size) {
++		memcpy(*kbuf, from, size);
++		*kbuf += size;
++		*pos += size;
++		*count -= size;
+ 	}
+ }
+ 
+@@ -978,8 +991,9 @@ __copy_xstate_to_kernel(void *kbuf, const void *data,
+  */
+ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int offset_start, unsigned int size_total)
+ {
+-	unsigned int offset, size;
+ 	struct xstate_header header;
++	const unsigned off_mxcsr = offsetof(struct fxregs_state, mxcsr);
++	unsigned count = size_total;
+ 	int i;
+ 
+ 	/*
+@@ -995,46 +1009,42 @@ int copy_xstate_to_kernel(void *kbuf, struct xregs_state *xsave, unsigned int of
+ 	header.xfeatures = xsave->header.xfeatures;
+ 	header.xfeatures &= ~XFEATURE_MASK_SUPERVISOR;
+ 
++	if (header.xfeatures & XFEATURE_MASK_FP)
++		copy_part(0, off_mxcsr,
++			  &xsave->i387, &kbuf, &offset_start, &count);
++	if (header.xfeatures & (XFEATURE_MASK_SSE | XFEATURE_MASK_YMM))
++		copy_part(off_mxcsr, MXCSR_AND_FLAGS_SIZE,
++			  &xsave->i387.mxcsr, &kbuf, &offset_start, &count);
++	if (header.xfeatures & XFEATURE_MASK_FP)
++		copy_part(offsetof(struct fxregs_state, st_space), 128,
++			  &xsave->i387.st_space, &kbuf, &offset_start, &count);
++	if (header.xfeatures & XFEATURE_MASK_SSE)
++		copy_part(xstate_offsets[XFEATURE_MASK_SSE], 256,
++			  &xsave->i387.xmm_space, &kbuf, &offset_start, &count);
++	/*
++	 * Fill xsave->i387.sw_reserved value for ptrace frame:
++	 */
++	copy_part(offsetof(struct fxregs_state, sw_reserved), 48,
++		  xstate_fx_sw_bytes, &kbuf, &offset_start, &count);
+ 	/*
+ 	 * Copy xregs_state->header:
+ 	 */
+-	offset = offsetof(struct xregs_state, header);
+-	size = sizeof(header);
+-
+-	__copy_xstate_to_kernel(kbuf, &header, offset, size, size_total);
++	copy_part(offsetof(struct xregs_state, header), sizeof(header),
++		  &header, &kbuf, &offset_start, &count);
+ 
+-	for (i = 0; i < XFEATURE_MAX; i++) {
++	for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) {
+ 		/*
+ 		 * Copy only in-use xstates:
+ 		 */
+ 		if ((header.xfeatures >> i) & 1) {
+ 			void *src = __raw_xsave_addr(xsave, i);
+ 
+-			offset = xstate_offsets[i];
+-			size = xstate_sizes[i];
+-
+-			/* The next component has to fit fully into the output buffer: */
+-			if (offset + size > size_total)
+-				break;
+-
+-			__copy_xstate_to_kernel(kbuf, src, offset, size, size_total);
++			copy_part(xstate_offsets[i], xstate_sizes[i],
++				  src, &kbuf, &offset_start, &count);
+ 		}
+ 
+ 	}
+-
+-	if (xfeatures_mxcsr_quirk(header.xfeatures)) {
+-		offset = offsetof(struct fxregs_state, mxcsr);
+-		size = MXCSR_AND_FLAGS_SIZE;
+-		__copy_xstate_to_kernel(kbuf, &xsave->i387.mxcsr, offset, size, size_total);
+-	}
+-
+-	/*
+-	 * Fill xsave->i387.sw_reserved value for ptrace frame:
+-	 */
+-	offset = offsetof(struct fxregs_state, sw_reserved);
+-	size = sizeof(xstate_fx_sw_bytes);
+-
+-	__copy_xstate_to_kernel(kbuf, xstate_fx_sw_bytes, offset, size, size_total);
++	fill_gap(size_total, &kbuf, &offset_start, &count);
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/ioport.c b/arch/x86/kernel/ioport.c
+index 8abeee0dd7bf..fce678f4471e 100644
+--- a/arch/x86/kernel/ioport.c
++++ b/arch/x86/kernel/ioport.c
+@@ -32,15 +32,15 @@ void io_bitmap_share(struct task_struct *tsk)
+ 	set_tsk_thread_flag(tsk, TIF_IO_BITMAP);
+ }
+ 
+-static void task_update_io_bitmap(void)
++static void task_update_io_bitmap(struct task_struct *tsk)
+ {
+-	struct thread_struct *t = &current->thread;
++	struct thread_struct *t = &tsk->thread;
+ 
+ 	if (t->iopl_emul == 3 || t->io_bitmap) {
+ 		/* TSS update is handled on exit to user space */
+-		set_thread_flag(TIF_IO_BITMAP);
++		set_tsk_thread_flag(tsk, TIF_IO_BITMAP);
+ 	} else {
+-		clear_thread_flag(TIF_IO_BITMAP);
++		clear_tsk_thread_flag(tsk, TIF_IO_BITMAP);
+ 		/* Invalidate TSS */
+ 		preempt_disable();
+ 		tss_update_io_bitmap();
+@@ -48,12 +48,12 @@ static void task_update_io_bitmap(void)
+ 	}
+ }
+ 
+-void io_bitmap_exit(void)
++void io_bitmap_exit(struct task_struct *tsk)
+ {
+-	struct io_bitmap *iobm = current->thread.io_bitmap;
++	struct io_bitmap *iobm = tsk->thread.io_bitmap;
+ 
+-	current->thread.io_bitmap = NULL;
+-	task_update_io_bitmap();
++	tsk->thread.io_bitmap = NULL;
++	task_update_io_bitmap(tsk);
+ 	if (iobm && refcount_dec_and_test(&iobm->refcnt))
+ 		kfree(iobm);
+ }
+@@ -101,7 +101,7 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on)
+ 		if (!iobm)
+ 			return -ENOMEM;
+ 		refcount_set(&iobm->refcnt, 1);
+-		io_bitmap_exit();
++		io_bitmap_exit(current);
+ 	}
+ 
+ 	/*
+@@ -133,7 +133,7 @@ long ksys_ioperm(unsigned long from, unsigned long num, int turn_on)
+ 	}
+ 	/* All permissions dropped? */
+ 	if (max_long == UINT_MAX) {
+-		io_bitmap_exit();
++		io_bitmap_exit(current);
+ 		return 0;
+ 	}
+ 
+@@ -191,7 +191,7 @@ SYSCALL_DEFINE1(iopl, unsigned int, level)
+ 	}
+ 
+ 	t->iopl_emul = level;
+-	task_update_io_bitmap();
++	task_update_io_bitmap(current);
+ 
+ 	return 0;
+ }
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 3053c85e0e42..9898f672b81d 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -97,7 +97,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
+ }
+ 
+ /*
+- * Free current thread data structures etc..
++ * Free thread data structures etc..
+  */
+ void exit_thread(struct task_struct *tsk)
+ {
+@@ -105,7 +105,7 @@ void exit_thread(struct task_struct *tsk)
+ 	struct fpu *fpu = &t->fpu;
+ 
+ 	if (test_thread_flag(TIF_IO_BITMAP))
+-		io_bitmap_exit();
++		io_bitmap_exit(tsk);
+ 
+ 	free_vm86(t);
+ 
+diff --git a/block/blk-core.c b/block/blk-core.c
+index 60dc9552ef8d..92232907605c 100644
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -885,14 +885,11 @@ generic_make_request_checks(struct bio *bio)
+ 	}
+ 
+ 	/*
+-	 * Non-mq queues do not honor REQ_NOWAIT, so complete a bio
+-	 * with BLK_STS_AGAIN status in order to catch -EAGAIN and
+-	 * to give a chance to the caller to repeat request gracefully.
++	 * For a REQ_NOWAIT based request, return -EOPNOTSUPP
++	 * if queue is not a request based queue.
+ 	 */
+-	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_mq(q)) {
+-		status = BLK_STS_AGAIN;
+-		goto end_io;
+-	}
++	if ((bio->bi_opf & REQ_NOWAIT) && !queue_is_mq(q))
++		goto not_supported;
+ 
+ 	if (should_fail_bio(bio))
+ 		goto end_io;
+diff --git a/drivers/clk/qcom/gcc-sm8150.c b/drivers/clk/qcom/gcc-sm8150.c
+index 20877214acff..e3959ff5cb55 100644
+--- a/drivers/clk/qcom/gcc-sm8150.c
++++ b/drivers/clk/qcom/gcc-sm8150.c
+@@ -75,8 +75,7 @@ static struct clk_alpha_pll_postdiv gpll0_out_even = {
+ 	.clkr.hw.init = &(struct clk_init_data){
+ 		.name = "gpll0_out_even",
+ 		.parent_data = &(const struct clk_parent_data){
+-			.fw_name = "bi_tcxo",
+-			.name = "bi_tcxo",
++			.hw = &gpll0.clkr.hw,
+ 		},
+ 		.num_parents = 1,
+ 		.ops = &clk_trion_pll_postdiv_ops,
+diff --git a/drivers/clk/ti/clk-33xx.c b/drivers/clk/ti/clk-33xx.c
+index e001b9bcb6bf..7dc30dd6c8d5 100644
+--- a/drivers/clk/ti/clk-33xx.c
++++ b/drivers/clk/ti/clk-33xx.c
+@@ -212,7 +212,7 @@ static const struct omap_clkctrl_reg_data am3_mpu_clkctrl_regs[] __initconst = {
+ };
+ 
+ static const struct omap_clkctrl_reg_data am3_l4_rtc_clkctrl_regs[] __initconst = {
+-	{ AM3_L4_RTC_RTC_CLKCTRL, NULL, CLKF_SW_SUP, "clk_32768_ck" },
++	{ AM3_L4_RTC_RTC_CLKCTRL, NULL, CLKF_SW_SUP, "clk-24mhz-clkctrl:0000:0" },
+ 	{ 0 },
+ };
+ 
+diff --git a/drivers/crypto/chelsio/chtls/chtls_io.c b/drivers/crypto/chelsio/chtls/chtls_io.c
+index 5cf9b021220b..fdaed234ae92 100644
+--- a/drivers/crypto/chelsio/chtls/chtls_io.c
++++ b/drivers/crypto/chelsio/chtls/chtls_io.c
+@@ -682,7 +682,7 @@ int chtls_push_frames(struct chtls_sock *csk, int comp)
+ 				make_tx_data_wr(sk, skb, immdlen, len,
+ 						credits_needed, completion);
+ 			tp->snd_nxt += len;
+-			tp->lsndtime = tcp_time_stamp(tp);
++			tp->lsndtime = tcp_jiffies32;
+ 			if (completion)
+ 				ULP_SKB_CB(skb)->flags &= ~ULPCB_FLAG_NEED_HDR;
+ 		} else {
+diff --git a/drivers/gpio/gpio-bcm-kona.c b/drivers/gpio/gpio-bcm-kona.c
+index baee8c3f06ad..cf3687a7925f 100644
+--- a/drivers/gpio/gpio-bcm-kona.c
++++ b/drivers/gpio/gpio-bcm-kona.c
+@@ -625,7 +625,7 @@ static int bcm_kona_gpio_probe(struct platform_device *pdev)
+ 
+ 	kona_gpio->reg_base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(kona_gpio->reg_base)) {
+-		ret = -ENXIO;
++		ret = PTR_ERR(kona_gpio->reg_base);
+ 		goto err_irq_domain;
+ 	}
+ 
+diff --git a/drivers/gpio/gpio-exar.c b/drivers/gpio/gpio-exar.c
+index da1ef0b1c291..b1accfba017d 100644
+--- a/drivers/gpio/gpio-exar.c
++++ b/drivers/gpio/gpio-exar.c
+@@ -148,8 +148,10 @@ static int gpio_exar_probe(struct platform_device *pdev)
+ 	mutex_init(&exar_gpio->lock);
+ 
+ 	index = ida_simple_get(&ida_index, 0, 0, GFP_KERNEL);
+-	if (index < 0)
+-		goto err_destroy;
++	if (index < 0) {
++		ret = index;
++		goto err_mutex_destroy;
++	}
+ 
+ 	sprintf(exar_gpio->name, "exar_gpio%d", index);
+ 	exar_gpio->gpio_chip.label = exar_gpio->name;
+@@ -176,6 +178,7 @@ static int gpio_exar_probe(struct platform_device *pdev)
+ 
+ err_destroy:
+ 	ida_simple_remove(&ida_index, index);
++err_mutex_destroy:
+ 	mutex_destroy(&exar_gpio->lock);
+ 	return ret;
+ }
+diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
+index d2b999c7987f..f0c5433a327f 100644
+--- a/drivers/gpio/gpio-mvebu.c
++++ b/drivers/gpio/gpio-mvebu.c
+@@ -782,6 +782,15 @@ static int mvebu_pwm_probe(struct platform_device *pdev,
+ 				     "marvell,armada-370-gpio"))
+ 		return 0;
+ 
++	/*
++	 * There are only two sets of PWM configuration registers for
++	 * all the GPIO lines on those SoCs which this driver reserves
++	 * for the first two GPIO chips. So if the resource is missing
++	 * we can't treat it as an error.
++	 */
++	if (!platform_get_resource_byname(pdev, IORESOURCE_MEM, "pwm"))
++		return 0;
++
+ 	if (IS_ERR(mvchip->clk))
+ 		return PTR_ERR(mvchip->clk);
+ 
+@@ -804,12 +813,6 @@ static int mvebu_pwm_probe(struct platform_device *pdev,
+ 	mvchip->mvpwm = mvpwm;
+ 	mvpwm->mvchip = mvchip;
+ 
+-	/*
+-	 * There are only two sets of PWM configuration registers for
+-	 * all the GPIO lines on those SoCs which this driver reserves
+-	 * for the first two GPIO chips. So if the resource is missing
+-	 * we can't treat it as an error.
+-	 */
+ 	mvpwm->membase = devm_platform_ioremap_resource_byname(pdev, "pwm");
+ 	if (IS_ERR(mvpwm->membase))
+ 		return PTR_ERR(mvpwm->membase);
+diff --git a/drivers/gpio/gpio-pxa.c b/drivers/gpio/gpio-pxa.c
+index 9888b62f37af..432c487f77b4 100644
+--- a/drivers/gpio/gpio-pxa.c
++++ b/drivers/gpio/gpio-pxa.c
+@@ -663,8 +663,8 @@ static int pxa_gpio_probe(struct platform_device *pdev)
+ 	pchip->irq1 = irq1;
+ 
+ 	gpio_reg_base = devm_platform_ioremap_resource(pdev, 0);
+-	if (!gpio_reg_base)
+-		return -EINVAL;
++	if (IS_ERR(gpio_reg_base))
++		return PTR_ERR(gpio_reg_base);
+ 
+ 	clk = clk_get(&pdev->dev, NULL);
+ 	if (IS_ERR(clk)) {
+diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
+index acb99eff9939..86568154cdb3 100644
+--- a/drivers/gpio/gpio-tegra.c
++++ b/drivers/gpio/gpio-tegra.c
+@@ -368,6 +368,7 @@ static void tegra_gpio_irq_shutdown(struct irq_data *d)
+ 	struct tegra_gpio_info *tgi = bank->tgi;
+ 	unsigned int gpio = d->hwirq;
+ 
++	tegra_gpio_irq_mask(d);
+ 	gpiochip_unlock_as_irq(&tgi->gc, gpio);
+ }
+ 
+diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
+index 00fb91feba70..2f350e3df965 100644
+--- a/drivers/gpio/gpiolib.c
++++ b/drivers/gpio/gpiolib.c
+@@ -4025,7 +4025,9 @@ int gpiochip_lock_as_irq(struct gpio_chip *chip, unsigned int offset)
+ 		}
+ 	}
+ 
+-	if (test_bit(FLAG_IS_OUT, &desc->flags)) {
++	/* To be valid for IRQ the line needs to be input or open drain */
++	if (test_bit(FLAG_IS_OUT, &desc->flags) &&
++	    !test_bit(FLAG_OPEN_DRAIN, &desc->flags)) {
+ 		chip_err(chip,
+ 			 "%s: tried to flag a GPIO set as output for IRQ\n",
+ 			 __func__);
+@@ -4088,7 +4090,12 @@ void gpiochip_enable_irq(struct gpio_chip *chip, unsigned int offset)
+ 
+ 	if (!IS_ERR(desc) &&
+ 	    !WARN_ON(!test_bit(FLAG_USED_AS_IRQ, &desc->flags))) {
+-		WARN_ON(test_bit(FLAG_IS_OUT, &desc->flags));
++		/*
++		 * We must not be output when using IRQ UNLESS we are
++		 * open drain.
++		 */
++		WARN_ON(test_bit(FLAG_IS_OUT, &desc->flags) &&
++			!test_bit(FLAG_OPEN_DRAIN, &desc->flags));
+ 		set_bit(FLAG_IRQ_IS_ENABLED, &desc->flags);
+ 	}
+ }
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+index fa8ac9d19a7a..6326c1792270 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c
+@@ -1304,7 +1304,7 @@ int amdgpu_amdkfd_gpuvm_free_memory_of_gpu(
+ 	}
+ 
+ 	/* Free the BO*/
+-	amdgpu_bo_unref(&mem->bo);
++	drm_gem_object_put_unlocked(&mem->bo->tbo.base);
+ 	mutex_destroy(&mem->lock);
+ 	kfree(mem);
+ 
+@@ -1647,7 +1647,8 @@ int amdgpu_amdkfd_gpuvm_import_dmabuf(struct kgd_dev *kgd,
+ 		 ALLOC_MEM_FLAGS_VRAM : ALLOC_MEM_FLAGS_GTT) |
+ 		ALLOC_MEM_FLAGS_WRITABLE | ALLOC_MEM_FLAGS_EXECUTABLE;
+ 
+-	(*mem)->bo = amdgpu_bo_ref(bo);
++	drm_gem_object_get(&bo->tbo.base);
++	(*mem)->bo = bo;
+ 	(*mem)->va = va;
+ 	(*mem)->domain = (bo->preferred_domains & AMDGPU_GEM_DOMAIN_VRAM) ?
+ 		AMDGPU_GEM_DOMAIN_VRAM : AMDGPU_GEM_DOMAIN_GTT;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+index 02702597ddeb..012df3d574bf 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
+@@ -4241,11 +4241,7 @@ static int gfx_v10_0_set_powergating_state(void *handle,
+ 	switch (adev->asic_type) {
+ 	case CHIP_NAVI10:
+ 	case CHIP_NAVI14:
+-		if (!enable) {
+-			amdgpu_gfx_off_ctrl(adev, false);
+-			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
+-		} else
+-			amdgpu_gfx_off_ctrl(adev, true);
++		amdgpu_gfx_off_ctrl(adev, enable);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+index 906648fca9ef..914dbd901b98 100644
+--- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
+@@ -4734,10 +4734,9 @@ static int gfx_v9_0_set_powergating_state(void *handle,
+ 	switch (adev->asic_type) {
+ 	case CHIP_RAVEN:
+ 	case CHIP_RENOIR:
+-		if (!enable) {
++		if (!enable)
+ 			amdgpu_gfx_off_ctrl(adev, false);
+-			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
+-		}
++
+ 		if (adev->pg_flags & AMD_PG_SUPPORT_RLC_SMU_HS) {
+ 			gfx_v9_0_enable_sck_slow_down_on_power_up(adev, true);
+ 			gfx_v9_0_enable_sck_slow_down_on_power_down(adev, true);
+@@ -4761,12 +4760,7 @@ static int gfx_v9_0_set_powergating_state(void *handle,
+ 			amdgpu_gfx_off_ctrl(adev, true);
+ 		break;
+ 	case CHIP_VEGA12:
+-		if (!enable) {
+-			amdgpu_gfx_off_ctrl(adev, false);
+-			cancel_delayed_work_sync(&adev->gfx.gfx_off_delay_work);
+-		} else {
+-			amdgpu_gfx_off_ctrl(adev, true);
+-		}
++		amdgpu_gfx_off_ctrl(adev, enable);
+ 		break;
+ 	default:
+ 		break;
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+index 0cd11d3d4cf4..8e7cffe10cc5 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -7746,13 +7746,6 @@ static int dm_update_plane_state(struct dc *dc,
+ 			return -EINVAL;
+ 		}
+ 
+-		if (new_plane_state->crtc_x <= -new_acrtc->max_cursor_width ||
+-			new_plane_state->crtc_y <= -new_acrtc->max_cursor_height) {
+-			DRM_DEBUG_ATOMIC("Bad cursor position %d, %d\n",
+-							 new_plane_state->crtc_x, new_plane_state->crtc_y);
+-			return -EINVAL;
+-		}
+-
+ 		return 0;
+ 	}
+ 
+diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+index 3abeff7722e3..e80371542622 100644
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_hdcp.c
+@@ -316,15 +316,15 @@ static void update_config(void *handle, struct cp_psp_stream_config *config)
+ 	struct mod_hdcp_display *display = &hdcp_work[link_index].display;
+ 	struct mod_hdcp_link *link = &hdcp_work[link_index].link;
+ 
+-	memset(display, 0, sizeof(*display));
+-	memset(link, 0, sizeof(*link));
+-
+-	display->index = aconnector->base.index;
+-
+ 	if (config->dpms_off) {
+ 		hdcp_remove_display(hdcp_work, link_index, aconnector);
+ 		return;
+ 	}
++
++	memset(display, 0, sizeof(*display));
++	memset(link, 0, sizeof(*link));
++
++	display->index = aconnector->base.index;
+ 	display->state = MOD_HDCP_DISPLAY_ACTIVE;
+ 
+ 	if (aconnector->dc_sink != NULL)
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index b3987124183a..32a07665863f 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -763,6 +763,29 @@ static bool disable_all_writeback_pipes_for_stream(
+ 	return true;
+ }
+ 
++void apply_ctx_interdependent_lock(struct dc *dc, struct dc_state *context, struct dc_stream_state *stream, bool lock)
++{
++	int i = 0;
++
++	/* Checks if interdependent update function pointer is NULL or not, takes care of DCE110 case */
++	if (dc->hwss.interdependent_update_lock)
++		dc->hwss.interdependent_update_lock(dc, context, lock);
++	else {
++		for (i = 0; i < dc->res_pool->pipe_count; i++) {
++			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++			struct pipe_ctx *old_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
++
++			// Copied conditions that were previously in dce110_apply_ctx_for_surface
++			if (stream == pipe_ctx->stream) {
++				if (!pipe_ctx->top_pipe &&
++					(pipe_ctx->plane_state || old_pipe_ctx->plane_state))
++					dc->hwss.pipe_control_lock(dc, pipe_ctx, lock);
++				break;
++			}
++		}
++	}
++}
++
+ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ {
+ 	int i, j;
+@@ -788,11 +811,20 @@ static void disable_dangling_plane(struct dc *dc, struct dc_state *context)
+ 		if (should_disable && old_stream) {
+ 			dc_rem_all_planes_for_stream(dc, old_stream, dangling_context);
+ 			disable_all_writeback_pipes_for_stream(dc, old_stream, dangling_context);
+-			if (dc->hwss.apply_ctx_for_surface)
++
++			if (dc->hwss.apply_ctx_for_surface) {
++				apply_ctx_interdependent_lock(dc, dc->current_state, old_stream, true);
+ 				dc->hwss.apply_ctx_for_surface(dc, old_stream, 0, dangling_context);
++				apply_ctx_interdependent_lock(dc, dc->current_state, old_stream, false);
++				dc->hwss.post_unlock_program_front_end(dc, dangling_context);
++			}
++			if (dc->hwss.program_front_end_for_ctx) {
++				dc->hwss.interdependent_update_lock(dc, dc->current_state, true);
++				dc->hwss.program_front_end_for_ctx(dc, dangling_context);
++				dc->hwss.interdependent_update_lock(dc, dc->current_state, false);
++				dc->hwss.post_unlock_program_front_end(dc, dangling_context);
++			}
+ 		}
+-		if (dc->hwss.program_front_end_for_ctx)
+-			dc->hwss.program_front_end_for_ctx(dc, dangling_context);
+ 	}
+ 
+ 	current_ctx = dc->current_state;
+@@ -1211,16 +1243,19 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 	/* re-program planes for existing stream, in case we need to
+ 	 * free up plane resource for later use
+ 	 */
+-	if (dc->hwss.apply_ctx_for_surface)
++	if (dc->hwss.apply_ctx_for_surface) {
+ 		for (i = 0; i < context->stream_count; i++) {
+ 			if (context->streams[i]->mode_changed)
+ 				continue;
+-
++			apply_ctx_interdependent_lock(dc, context, context->streams[i], true);
+ 			dc->hwss.apply_ctx_for_surface(
+ 				dc, context->streams[i],
+ 				context->stream_status[i].plane_count,
+ 				context); /* use new pipe config in new context */
++			apply_ctx_interdependent_lock(dc, context, context->streams[i], false);
++			dc->hwss.post_unlock_program_front_end(dc, context);
+ 		}
++	}
+ 
+ 	/* Program hardware */
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+@@ -1239,19 +1274,27 @@ static enum dc_status dc_commit_state_no_check(struct dc *dc, struct dc_state *c
+ 	}
+ 
+ 	/* Program all planes within new context*/
+-	if (dc->hwss.program_front_end_for_ctx)
++	if (dc->hwss.program_front_end_for_ctx) {
++		dc->hwss.interdependent_update_lock(dc, context, true);
+ 		dc->hwss.program_front_end_for_ctx(dc, context);
++		dc->hwss.interdependent_update_lock(dc, context, false);
++		dc->hwss.post_unlock_program_front_end(dc, context);
++	}
+ 	for (i = 0; i < context->stream_count; i++) {
+ 		const struct dc_link *link = context->streams[i]->link;
+ 
+ 		if (!context->streams[i]->mode_changed)
+ 			continue;
+ 
+-		if (dc->hwss.apply_ctx_for_surface)
++		if (dc->hwss.apply_ctx_for_surface) {
++			apply_ctx_interdependent_lock(dc, context, context->streams[i], true);
+ 			dc->hwss.apply_ctx_for_surface(
+ 					dc, context->streams[i],
+ 					context->stream_status[i].plane_count,
+ 					context);
++			apply_ctx_interdependent_lock(dc, context, context->streams[i], false);
++			dc->hwss.post_unlock_program_front_end(dc, context);
++		}
+ 
+ 		/*
+ 		 * enable stereo
+@@ -1735,14 +1778,15 @@ static enum surface_update_type check_update_surfaces_for_stream(
+ 
+ 		if (stream_update->wb_update)
+ 			su_flags->bits.wb_update = 1;
++
++		if (stream_update->dsc_config)
++			su_flags->bits.dsc_changed = 1;
++
+ 		if (su_flags->raw != 0)
+ 			overall_type = UPDATE_TYPE_FULL;
+ 
+ 		if (stream_update->output_csc_transform || stream_update->output_color_space)
+ 			su_flags->bits.out_csc = 1;
+-
+-		if (stream_update->dsc_config)
+-			overall_type = UPDATE_TYPE_FULL;
+ 	}
+ 
+ 	for (i = 0 ; i < surface_count; i++) {
+@@ -1777,8 +1821,11 @@ enum surface_update_type dc_check_update_surfaces_for_stream(
+ 
+ 	type = check_update_surfaces_for_stream(dc, updates, surface_count, stream_update, stream_status);
+ 	if (type == UPDATE_TYPE_FULL) {
+-		if (stream_update)
++		if (stream_update) {
++			uint32_t dsc_changed = stream_update->stream->update_flags.bits.dsc_changed;
+ 			stream_update->stream->update_flags.raw = 0xFFFFFFFF;
++			stream_update->stream->update_flags.bits.dsc_changed = dsc_changed;
++		}
+ 		for (i = 0; i < surface_count; i++)
+ 			updates[i].surface->update_flags.raw = 0xFFFFFFFF;
+ 	}
+@@ -2094,18 +2141,14 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 				}
+ 			}
+ 
+-			if (stream_update->dsc_config && dc->hwss.pipe_control_lock_global) {
+-				dc->hwss.pipe_control_lock_global(dc, pipe_ctx, true);
+-				dp_update_dsc_config(pipe_ctx);
+-				dc->hwss.pipe_control_lock_global(dc, pipe_ctx, false);
+-			}
+ 			/* Full fe update*/
+ 			if (update_type == UPDATE_TYPE_FAST)
+ 				continue;
+ 
+-			if (stream_update->dpms_off) {
+-				dc->hwss.pipe_control_lock(dc, pipe_ctx, true);
++			if (stream_update->dsc_config)
++				dp_update_dsc_config(pipe_ctx);
+ 
++			if (stream_update->dpms_off) {
+ 				if (*stream_update->dpms_off) {
+ 					core_link_disable_stream(pipe_ctx);
+ 					/* for dpms, keep acquired resources*/
+@@ -2119,8 +2162,6 @@ static void commit_planes_do_stream_update(struct dc *dc,
+ 
+ 					core_link_enable_stream(dc->current_state, pipe_ctx);
+ 				}
+-
+-				dc->hwss.pipe_control_lock(dc, pipe_ctx, false);
+ 			}
+ 
+ 			if (stream_update->abm_level && pipe_ctx->stream_res.abm) {
+@@ -2176,6 +2217,27 @@ static void commit_planes_for_stream(struct dc *dc,
+ 		context_clock_trace(dc, context);
+ 	}
+ 
++	for (j = 0; j < dc->res_pool->pipe_count; j++) {
++		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[j];
++
++		if (!pipe_ctx->top_pipe &&
++			!pipe_ctx->prev_odm_pipe &&
++			pipe_ctx->stream &&
++			pipe_ctx->stream == stream) {
++			top_pipe_to_program = pipe_ctx;
++		}
++	}
++
++	if ((update_type != UPDATE_TYPE_FAST) && dc->hwss.interdependent_update_lock)
++		dc->hwss.interdependent_update_lock(dc, context, true);
++	else
++		/* Lock the top pipe while updating plane addrs, since freesync requires
++		 *  plane addr update event triggers to be synchronized.
++		 *  top_pipe_to_program is expected to never be NULL
++		 */
++		dc->hwss.pipe_control_lock(dc, top_pipe_to_program, true);
++
++
+ 	// Stream updates
+ 	if (stream_update)
+ 		commit_planes_do_stream_update(dc, stream, stream_update, update_type, context);
+@@ -2190,6 +2252,12 @@ static void commit_planes_for_stream(struct dc *dc,
+ 		if (dc->hwss.program_front_end_for_ctx)
+ 			dc->hwss.program_front_end_for_ctx(dc, context);
+ 
++		if ((update_type != UPDATE_TYPE_FAST) && dc->hwss.interdependent_update_lock)
++			dc->hwss.interdependent_update_lock(dc, context, false);
++		else
++			dc->hwss.pipe_control_lock(dc, top_pipe_to_program, false);
++
++		dc->hwss.post_unlock_program_front_end(dc, context);
+ 		return;
+ 	}
+ 
+@@ -2225,8 +2293,6 @@ static void commit_planes_for_stream(struct dc *dc,
+ 			pipe_ctx->stream == stream) {
+ 			struct dc_stream_status *stream_status = NULL;
+ 
+-			top_pipe_to_program = pipe_ctx;
+-
+ 			if (!pipe_ctx->plane_state)
+ 				continue;
+ 
+@@ -2271,12 +2337,6 @@ static void commit_planes_for_stream(struct dc *dc,
+ 
+ 	// Update Type FAST, Surface updates
+ 	if (update_type == UPDATE_TYPE_FAST) {
+-		/* Lock the top pipe while updating plane addrs, since freesync requires
+-		 *  plane addr update event triggers to be synchronized.
+-		 *  top_pipe_to_program is expected to never be NULL
+-		 */
+-		dc->hwss.pipe_control_lock(dc, top_pipe_to_program, true);
+-
+ 		if (dc->hwss.set_flip_control_gsl)
+ 			for (i = 0; i < surface_count; i++) {
+ 				struct dc_plane_state *plane_state = srf_updates[i].surface;
+@@ -2318,9 +2378,15 @@ static void commit_planes_for_stream(struct dc *dc,
+ 					dc->hwss.update_plane_addr(dc, pipe_ctx);
+ 			}
+ 		}
++	}
+ 
++	if ((update_type != UPDATE_TYPE_FAST) && dc->hwss.interdependent_update_lock)
++		dc->hwss.interdependent_update_lock(dc, context, false);
++	else
+ 		dc->hwss.pipe_control_lock(dc, top_pipe_to_program, false);
+-	}
++
++	if (update_type != UPDATE_TYPE_FAST)
++		dc->hwss.post_unlock_program_front_end(dc, context);
+ 
+ 	// Fire manual trigger only when bottom plane is flipped
+ 	for (j = 0; j < dc->res_pool->pipe_count; j++) {
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+index 8c20e9e907b2..4f0e7203dba4 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_stream.c
+@@ -231,34 +231,6 @@ struct dc_stream_status *dc_stream_get_status(
+ 	return dc_stream_get_status_from_state(dc->current_state, stream);
+ }
+ 
+-static void delay_cursor_until_vupdate(struct pipe_ctx *pipe_ctx, struct dc *dc)
+-{
+-#if defined(CONFIG_DRM_AMD_DC_DCN)
+-	unsigned int vupdate_line;
+-	unsigned int lines_to_vupdate, us_to_vupdate, vpos, nvpos;
+-	struct dc_stream_state *stream = pipe_ctx->stream;
+-	unsigned int us_per_line;
+-
+-	if (!dc->hwss.get_vupdate_offset_from_vsync)
+-		return;
+-
+-	vupdate_line = dc->hwss.get_vupdate_offset_from_vsync(pipe_ctx);
+-	if (!dc_stream_get_crtc_position(dc, &stream, 1, &vpos, &nvpos))
+-		return;
+-
+-	if (vpos >= vupdate_line)
+-		return;
+-
+-	us_per_line =
+-		stream->timing.h_total * 10000 / stream->timing.pix_clk_100hz;
+-	lines_to_vupdate = vupdate_line - vpos;
+-	us_to_vupdate = lines_to_vupdate * us_per_line;
+-
+-	/* 70 us is a conservative estimate of cursor update time*/
+-	if (us_to_vupdate < 70)
+-		udelay(us_to_vupdate);
+-#endif
+-}
+ 
+ /**
+  * dc_stream_set_cursor_attributes() - Update cursor attributes and set cursor surface address
+@@ -298,9 +270,7 @@ bool dc_stream_set_cursor_attributes(
+ 
+ 		if (!pipe_to_program) {
+ 			pipe_to_program = pipe_ctx;
+-
+-			delay_cursor_until_vupdate(pipe_ctx, dc);
+-			dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
++			dc->hwss.cursor_lock(dc, pipe_to_program, true);
+ 		}
+ 
+ 		dc->hwss.set_cursor_attribute(pipe_ctx);
+@@ -309,7 +279,7 @@ bool dc_stream_set_cursor_attributes(
+ 	}
+ 
+ 	if (pipe_to_program)
+-		dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
++		dc->hwss.cursor_lock(dc, pipe_to_program, false);
+ 
+ 	return true;
+ }
+@@ -349,16 +319,14 @@ bool dc_stream_set_cursor_position(
+ 
+ 		if (!pipe_to_program) {
+ 			pipe_to_program = pipe_ctx;
+-
+-			delay_cursor_until_vupdate(pipe_ctx, dc);
+-			dc->hwss.pipe_control_lock(dc, pipe_to_program, true);
++			dc->hwss.cursor_lock(dc, pipe_to_program, true);
+ 		}
+ 
+ 		dc->hwss.set_cursor_position(pipe_ctx);
+ 	}
+ 
+ 	if (pipe_to_program)
+-		dc->hwss.pipe_control_lock(dc, pipe_to_program, false);
++		dc->hwss.cursor_lock(dc, pipe_to_program, false);
+ 
+ 	return true;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/dc_stream.h b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+index 92096de79dec..a5c7ef47b8d3 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc_stream.h
++++ b/drivers/gpu/drm/amd/display/dc/dc_stream.h
+@@ -118,6 +118,7 @@ union stream_update_flags {
+ 		uint32_t dpms_off:1;
+ 		uint32_t gamut_remap:1;
+ 		uint32_t wb_update:1;
++		uint32_t dsc_changed : 1;
+ 	} bits;
+ 
+ 	uint32_t raw;
+diff --git a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+index 5b689273ff44..454a123b92fc 100644
+--- a/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dce110/dce110_hw_sequencer.c
+@@ -2574,17 +2574,6 @@ static void dce110_apply_ctx_for_surface(
+ 	if (dc->fbc_compressor)
+ 		dc->fbc_compressor->funcs->disable_fbc(dc->fbc_compressor);
+ 
+-	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+-		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+-		struct pipe_ctx *old_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
+-
+-		if (stream == pipe_ctx->stream) {
+-			if (!pipe_ctx->top_pipe &&
+-				(pipe_ctx->plane_state || old_pipe_ctx->plane_state))
+-				dc->hwss.pipe_control_lock(dc, pipe_ctx, true);
+-		}
+-	}
+-
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+ 		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+ 
+@@ -2607,20 +2596,16 @@ static void dce110_apply_ctx_for_surface(
+ 
+ 	}
+ 
+-	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+-		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+-		struct pipe_ctx *old_pipe_ctx = &dc->current_state->res_ctx.pipe_ctx[i];
+-
+-		if ((stream == pipe_ctx->stream) &&
+-			(!pipe_ctx->top_pipe) &&
+-			(pipe_ctx->plane_state || old_pipe_ctx->plane_state))
+-			dc->hwss.pipe_control_lock(dc, pipe_ctx, false);
+-	}
+-
+ 	if (dc->fbc_compressor)
+ 		enable_fbc(dc, context);
+ }
+ 
++static void dce110_post_unlock_program_front_end(
++		struct dc *dc,
++		struct dc_state *context)
++{
++}
++
+ static void dce110_power_down_fe(struct dc *dc, struct pipe_ctx *pipe_ctx)
+ {
+ 	struct dce_hwseq *hws = dc->hwseq;
+@@ -2722,6 +2707,7 @@ static const struct hw_sequencer_funcs dce110_funcs = {
+ 	.init_hw = init_hw,
+ 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
+ 	.apply_ctx_for_surface = dce110_apply_ctx_for_surface,
++	.post_unlock_program_front_end = dce110_post_unlock_program_front_end,
+ 	.update_plane_addr = update_plane_addr,
+ 	.update_pending_status = dce110_update_pending_status,
+ 	.enable_accelerated_mode = dce110_enable_accelerated_mode,
+@@ -2736,6 +2722,8 @@ static const struct hw_sequencer_funcs dce110_funcs = {
+ 	.disable_audio_stream = dce110_disable_audio_stream,
+ 	.disable_plane = dce110_power_down_fe,
+ 	.pipe_control_lock = dce_pipe_control_lock,
++	.interdependent_update_lock = NULL,
++	.cursor_lock = dce_pipe_control_lock,
+ 	.prepare_bandwidth = dce110_prepare_bandwidth,
+ 	.optimize_bandwidth = dce110_optimize_bandwidth,
+ 	.set_drr = set_drr,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+index 1008ac8a0f2a..0c987b5d68e2 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.c
+@@ -82,7 +82,7 @@ void print_microsec(struct dc_context *dc_ctx,
+ 			us_x10 % frac);
+ }
+ 
+-static void dcn10_lock_all_pipes(struct dc *dc,
++void dcn10_lock_all_pipes(struct dc *dc,
+ 	struct dc_state *context,
+ 	bool lock)
+ {
+@@ -93,6 +93,7 @@ static void dcn10_lock_all_pipes(struct dc *dc,
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++) {
+ 		pipe_ctx = &context->res_ctx.pipe_ctx[i];
+ 		tg = pipe_ctx->stream_res.tg;
++
+ 		/*
+ 		 * Only lock the top pipe's tg to prevent redundant
+ 		 * (un)locking. Also skip if pipe is disabled.
+@@ -103,9 +104,9 @@ static void dcn10_lock_all_pipes(struct dc *dc,
+ 			continue;
+ 
+ 		if (lock)
+-			tg->funcs->lock(tg);
++			dc->hwss.pipe_control_lock(dc, pipe_ctx, true);
+ 		else
+-			tg->funcs->unlock(tg);
++			dc->hwss.pipe_control_lock(dc, pipe_ctx, false);
+ 	}
+ }
+ 
+@@ -1576,7 +1577,7 @@ void dcn10_pipe_control_lock(
+ 	/* use TG master update lock to lock everything on the TG
+ 	 * therefore only top pipe need to lock
+ 	 */
+-	if (pipe->top_pipe)
++	if (!pipe || pipe->top_pipe)
+ 		return;
+ 
+ 	if (dc->debug.sanity_checks)
+@@ -1591,6 +1592,85 @@ void dcn10_pipe_control_lock(
+ 		hws->funcs.verify_allow_pstate_change_high(dc);
+ }
+ 
++/**
++ * delay_cursor_until_vupdate() - Delay cursor update if too close to VUPDATE.
++ *
++ * Software keepout workaround to prevent cursor update locking from stalling
++ * out cursor updates indefinitely or from old values from being retained in
++ * the case where the viewport changes in the same frame as the cursor.
++ *
++ * The idea is to calculate the remaining time from VPOS to VUPDATE. If it's
++ * too close to VUPDATE, then stall out until VUPDATE finishes.
++ *
++ * TODO: Optimize cursor programming to be once per frame before VUPDATE
++ *       to avoid the need for this workaround.
++ */
++static void delay_cursor_until_vupdate(struct dc *dc, struct pipe_ctx *pipe_ctx)
++{
++	struct dc_stream_state *stream = pipe_ctx->stream;
++	struct crtc_position position;
++	uint32_t vupdate_start, vupdate_end;
++	unsigned int lines_to_vupdate, us_to_vupdate, vpos;
++	unsigned int us_per_line, us_vupdate;
++
++	if (!dc->hwss.calc_vupdate_position || !dc->hwss.get_position)
++		return;
++
++	if (!pipe_ctx->stream_res.stream_enc || !pipe_ctx->stream_res.tg)
++		return;
++
++	dc->hwss.calc_vupdate_position(dc, pipe_ctx, &vupdate_start,
++				       &vupdate_end);
++
++	dc->hwss.get_position(&pipe_ctx, 1, &position);
++	vpos = position.vertical_count;
++
++	/* Avoid wraparound calculation issues */
++	vupdate_start += stream->timing.v_total;
++	vupdate_end += stream->timing.v_total;
++	vpos += stream->timing.v_total;
++
++	if (vpos <= vupdate_start) {
++		/* VPOS is in VACTIVE or back porch. */
++		lines_to_vupdate = vupdate_start - vpos;
++	} else if (vpos > vupdate_end) {
++		/* VPOS is in the front porch. */
++		return;
++	} else {
++		/* VPOS is in VUPDATE. */
++		lines_to_vupdate = 0;
++	}
++
++	/* Calculate time until VUPDATE in microseconds. */
++	us_per_line =
++		stream->timing.h_total * 10000u / stream->timing.pix_clk_100hz;
++	us_to_vupdate = lines_to_vupdate * us_per_line;
++
++	/* 70 us is a conservative estimate of cursor update time*/
++	if (us_to_vupdate > 70)
++		return;
++
++	/* Stall out until the cursor update completes. */
++	if (vupdate_end < vupdate_start)
++		vupdate_end += stream->timing.v_total;
++	us_vupdate = (vupdate_end - vupdate_start + 1) * us_per_line;
++	udelay(us_to_vupdate + us_vupdate);
++}
++
++void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock)
++{
++	/* cursor lock is per MPCC tree, so only need to lock one pipe per stream */
++	if (!pipe || pipe->top_pipe)
++		return;
++
++	/* Prevent cursor lock from stalling out cursor updates. */
++	if (lock)
++		delay_cursor_until_vupdate(dc, pipe);
++
++	dc->res_pool->mpc->funcs->cursor_lock(dc->res_pool->mpc,
++			pipe->stream_res.opp->inst, lock);
++}
++
+ static bool wait_for_reset_trigger_to_occur(
+ 	struct dc_context *dc_ctx,
+ 	struct timing_generator *tg)
+@@ -2512,7 +2592,6 @@ void dcn10_apply_ctx_for_surface(
+ 	int i;
+ 	struct timing_generator *tg;
+ 	uint32_t underflow_check_delay_us;
+-	bool removed_pipe[4] = { false };
+ 	bool interdependent_update = false;
+ 	struct pipe_ctx *top_pipe_to_program =
+ 			dcn10_find_top_pipe_for_stream(dc, context, stream);
+@@ -2531,11 +2610,6 @@ void dcn10_apply_ctx_for_surface(
+ 	if (underflow_check_delay_us != 0xFFFFFFFF && hws->funcs.did_underflow_occur)
+ 		ASSERT(hws->funcs.did_underflow_occur(dc, top_pipe_to_program));
+ 
+-	if (interdependent_update)
+-		dcn10_lock_all_pipes(dc, context, true);
+-	else
+-		dcn10_pipe_control_lock(dc, top_pipe_to_program, true);
+-
+ 	if (underflow_check_delay_us != 0xFFFFFFFF)
+ 		udelay(underflow_check_delay_us);
+ 
+@@ -2552,18 +2626,8 @@ void dcn10_apply_ctx_for_surface(
+ 		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+ 		struct pipe_ctx *old_pipe_ctx =
+ 				&dc->current_state->res_ctx.pipe_ctx[i];
+-		/*
+-		 * Powergate reused pipes that are not powergated
+-		 * fairly hacky right now, using opp_id as indicator
+-		 * TODO: After move dc_post to dc_update, this will
+-		 * be removed.
+-		 */
+-		if (pipe_ctx->plane_state && !old_pipe_ctx->plane_state) {
+-			if (old_pipe_ctx->stream_res.tg == tg &&
+-			    old_pipe_ctx->plane_res.hubp &&
+-			    old_pipe_ctx->plane_res.hubp->opp_id != OPP_ID_INVALID)
+-				dc->hwss.disable_plane(dc, old_pipe_ctx);
+-		}
++
++		pipe_ctx->update_flags.raw = 0;
+ 
+ 		if ((!pipe_ctx->plane_state ||
+ 		     pipe_ctx->stream_res.tg != old_pipe_ctx->stream_res.tg) &&
+@@ -2571,7 +2635,7 @@ void dcn10_apply_ctx_for_surface(
+ 		    old_pipe_ctx->stream_res.tg == tg) {
+ 
+ 			hws->funcs.plane_atomic_disconnect(dc, old_pipe_ctx);
+-			removed_pipe[i] = true;
++			pipe_ctx->update_flags.bits.disable = 1;
+ 
+ 			DC_LOG_DC("Reset mpcc for pipe %d\n",
+ 					old_pipe_ctx->pipe_idx);
+@@ -2597,21 +2661,41 @@ void dcn10_apply_ctx_for_surface(
+ 				&pipe_ctx->dlg_regs,
+ 				&pipe_ctx->ttu_regs);
+ 		}
++}
+ 
+-	if (interdependent_update)
+-		dcn10_lock_all_pipes(dc, context, false);
+-	else
+-		dcn10_pipe_control_lock(dc, top_pipe_to_program, false);
++void dcn10_post_unlock_program_front_end(
++		struct dc *dc,
++		struct dc_state *context)
++{
++	int i, j;
+ 
+-	if (num_planes == 0)
+-		false_optc_underflow_wa(dc, stream, tg);
++	DC_LOGGER_INIT(dc->ctx->logger);
++
++	for (i = 0; i < dc->res_pool->pipe_count; i++) {
++		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++
++		if (!pipe_ctx->top_pipe &&
++			!pipe_ctx->prev_odm_pipe &&
++			pipe_ctx->stream) {
++			struct dc_stream_status *stream_status = NULL;
++			struct timing_generator *tg = pipe_ctx->stream_res.tg;
++
++			for (j = 0; j < context->stream_count; j++) {
++				if (pipe_ctx->stream == context->streams[j])
++					stream_status = &context->stream_status[j];
++			}
++
++			if (context->stream_status[i].plane_count == 0)
++				false_optc_underflow_wa(dc, pipe_ctx->stream, tg);
++		}
++	}
+ 
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+-		if (removed_pipe[i])
++		if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable)
+ 			dc->hwss.disable_plane(dc, &dc->current_state->res_ctx.pipe_ctx[i]);
+ 
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+-		if (removed_pipe[i]) {
++		if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable) {
+ 			dc->hwss.optimize_bandwidth(dc, context);
+ 			break;
+ 		}
+@@ -3127,7 +3211,7 @@ int dcn10_get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx)
+ 	return vertical_line_start;
+ }
+ 
+-static void dcn10_calc_vupdate_position(
++void dcn10_calc_vupdate_position(
+ 		struct dc *dc,
+ 		struct pipe_ctx *pipe_ctx,
+ 		uint32_t *start_line,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+index 4d20f6586bb5..42b6e016d71e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_hw_sequencer.h
+@@ -34,6 +34,11 @@ struct dc;
+ void dcn10_hw_sequencer_construct(struct dc *dc);
+ 
+ int dcn10_get_vupdate_offset_from_vsync(struct pipe_ctx *pipe_ctx);
++void dcn10_calc_vupdate_position(
++		struct dc *dc,
++		struct pipe_ctx *pipe_ctx,
++		uint32_t *start_line,
++		uint32_t *end_line);
+ void dcn10_setup_vupdate_interrupt(struct dc *dc, struct pipe_ctx *pipe_ctx);
+ enum dc_status dcn10_enable_stream_timing(
+ 		struct pipe_ctx *pipe_ctx,
+@@ -49,6 +54,7 @@ void dcn10_pipe_control_lock(
+ 	struct dc *dc,
+ 	struct pipe_ctx *pipe,
+ 	bool lock);
++void dcn10_cursor_lock(struct dc *dc, struct pipe_ctx *pipe, bool lock);
+ void dcn10_blank_pixel_data(
+ 		struct dc *dc,
+ 		struct pipe_ctx *pipe_ctx,
+@@ -70,11 +76,18 @@ void dcn10_reset_hw_ctx_wrap(
+ 		struct dc *dc,
+ 		struct dc_state *context);
+ void dcn10_disable_plane(struct dc *dc, struct pipe_ctx *pipe_ctx);
++void dcn10_lock_all_pipes(
++		struct dc *dc,
++		struct dc_state *context,
++		bool lock);
+ void dcn10_apply_ctx_for_surface(
+ 		struct dc *dc,
+ 		const struct dc_stream_state *stream,
+ 		int num_planes,
+ 		struct dc_state *context);
++void dcn10_post_unlock_program_front_end(
++		struct dc *dc,
++		struct dc_state *context);
+ void dcn10_hubp_pg_control(
+ 		struct dce_hwseq *hws,
+ 		unsigned int hubp_inst,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+index e7e5352ec424..0900c861204f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_init.c
+@@ -32,6 +32,7 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
+ 	.init_hw = dcn10_init_hw,
+ 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
+ 	.apply_ctx_for_surface = dcn10_apply_ctx_for_surface,
++	.post_unlock_program_front_end = dcn10_post_unlock_program_front_end,
+ 	.update_plane_addr = dcn10_update_plane_addr,
+ 	.update_dchub = dcn10_update_dchub,
+ 	.update_pending_status = dcn10_update_pending_status,
+@@ -49,6 +50,8 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
+ 	.disable_audio_stream = dce110_disable_audio_stream,
+ 	.disable_plane = dcn10_disable_plane,
+ 	.pipe_control_lock = dcn10_pipe_control_lock,
++	.cursor_lock = dcn10_cursor_lock,
++	.interdependent_update_lock = dcn10_lock_all_pipes,
+ 	.prepare_bandwidth = dcn10_prepare_bandwidth,
+ 	.optimize_bandwidth = dcn10_optimize_bandwidth,
+ 	.set_drr = dcn10_set_drr,
+@@ -69,6 +72,7 @@ static const struct hw_sequencer_funcs dcn10_funcs = {
+ 	.set_clock = dcn10_set_clock,
+ 	.get_clock = dcn10_get_clock,
+ 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
++	.calc_vupdate_position = dcn10_calc_vupdate_position,
+ };
+ 
+ static const struct hwseq_private_funcs dcn10_private_funcs = {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+index 04f863499cfb..3fcd408e9103 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.c
+@@ -223,6 +223,9 @@ struct mpcc *mpc1_insert_plane(
+ 	REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, dpp_id);
+ 	REG_SET(MPCC_OPP_ID[mpcc_id], 0, MPCC_OPP_ID, tree->opp_id);
+ 
++	/* Configure VUPDATE lock set for this MPCC to map to the OPP */
++	REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, tree->opp_id);
++
+ 	/* update mpc tree mux setting */
+ 	if (tree->opp_list == insert_above_mpcc) {
+ 		/* insert the toppest mpcc */
+@@ -318,6 +321,7 @@ void mpc1_remove_mpcc(
+ 		REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
+ 		REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
+ 		REG_SET(MPCC_OPP_ID[mpcc_id],  0, MPCC_OPP_ID,  0xf);
++		REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
+ 
+ 		/* mark this mpcc as not in use */
+ 		mpc10->mpcc_in_use_mask &= ~(1 << mpcc_id);
+@@ -328,6 +332,7 @@ void mpc1_remove_mpcc(
+ 		REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
+ 		REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
+ 		REG_SET(MPCC_OPP_ID[mpcc_id],  0, MPCC_OPP_ID,  0xf);
++		REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
+ 	}
+ }
+ 
+@@ -361,6 +366,7 @@ void mpc1_mpc_init(struct mpc *mpc)
+ 		REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
+ 		REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
+ 		REG_SET(MPCC_OPP_ID[mpcc_id],  0, MPCC_OPP_ID,  0xf);
++		REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
+ 
+ 		mpc1_init_mpcc(&(mpc->mpcc_array[mpcc_id]), mpcc_id);
+ 	}
+@@ -381,6 +387,7 @@ void mpc1_mpc_init_single_inst(struct mpc *mpc, unsigned int mpcc_id)
+ 	REG_SET(MPCC_TOP_SEL[mpcc_id], 0, MPCC_TOP_SEL, 0xf);
+ 	REG_SET(MPCC_BOT_SEL[mpcc_id], 0, MPCC_BOT_SEL, 0xf);
+ 	REG_SET(MPCC_OPP_ID[mpcc_id],  0, MPCC_OPP_ID,  0xf);
++	REG_SET(MPCC_UPDATE_LOCK_SEL[mpcc_id], 0, MPCC_UPDATE_LOCK_SEL, 0xf);
+ 
+ 	mpc1_init_mpcc(&(mpc->mpcc_array[mpcc_id]), mpcc_id);
+ 
+@@ -453,6 +460,13 @@ void mpc1_read_mpcc_state(
+ 			MPCC_BUSY, &s->busy);
+ }
+ 
++void mpc1_cursor_lock(struct mpc *mpc, int opp_id, bool lock)
++{
++	struct dcn10_mpc *mpc10 = TO_DCN10_MPC(mpc);
++
++	REG_SET(CUR[opp_id], 0, CUR_VUPDATE_LOCK_SET, lock ? 1 : 0);
++}
++
+ static const struct mpc_funcs dcn10_mpc_funcs = {
+ 	.read_mpcc_state = mpc1_read_mpcc_state,
+ 	.insert_plane = mpc1_insert_plane,
+@@ -464,6 +478,7 @@ static const struct mpc_funcs dcn10_mpc_funcs = {
+ 	.assert_mpcc_idle_before_connect = mpc1_assert_mpcc_idle_before_connect,
+ 	.init_mpcc_list_from_hw = mpc1_init_mpcc_list_from_hw,
+ 	.update_blending = mpc1_update_blending,
++	.cursor_lock = mpc1_cursor_lock,
+ 	.set_denorm = NULL,
+ 	.set_denorm_clamp = NULL,
+ 	.set_output_csc = NULL,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.h b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.h
+index 962a68e322ee..66a4719c22a0 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_mpc.h
+@@ -39,11 +39,12 @@
+ 	SRII(MPCC_BG_G_Y, MPCC, inst),\
+ 	SRII(MPCC_BG_R_CR, MPCC, inst),\
+ 	SRII(MPCC_BG_B_CB, MPCC, inst),\
+-	SRII(MPCC_BG_B_CB, MPCC, inst),\
+-	SRII(MPCC_SM_CONTROL, MPCC, inst)
++	SRII(MPCC_SM_CONTROL, MPCC, inst),\
++	SRII(MPCC_UPDATE_LOCK_SEL, MPCC, inst)
+ 
+ #define MPC_OUT_MUX_COMMON_REG_LIST_DCN1_0(inst) \
+-	SRII(MUX, MPC_OUT, inst)
++	SRII(MUX, MPC_OUT, inst),\
++	VUPDATE_SRII(CUR, VUPDATE_LOCK_SET, inst)
+ 
+ #define MPC_COMMON_REG_VARIABLE_LIST \
+ 	uint32_t MPCC_TOP_SEL[MAX_MPCC]; \
+@@ -55,7 +56,9 @@
+ 	uint32_t MPCC_BG_R_CR[MAX_MPCC]; \
+ 	uint32_t MPCC_BG_B_CB[MAX_MPCC]; \
+ 	uint32_t MPCC_SM_CONTROL[MAX_MPCC]; \
+-	uint32_t MUX[MAX_OPP];
++	uint32_t MUX[MAX_OPP]; \
++	uint32_t MPCC_UPDATE_LOCK_SEL[MAX_MPCC]; \
++	uint32_t CUR[MAX_OPP];
+ 
+ #define MPC_COMMON_MASK_SH_LIST_DCN1_0(mask_sh)\
+ 	SF(MPCC0_MPCC_TOP_SEL, MPCC_TOP_SEL, mask_sh),\
+@@ -78,7 +81,8 @@
+ 	SF(MPCC0_MPCC_SM_CONTROL, MPCC_SM_FIELD_ALT, mask_sh),\
+ 	SF(MPCC0_MPCC_SM_CONTROL, MPCC_SM_FORCE_NEXT_FRAME_POL, mask_sh),\
+ 	SF(MPCC0_MPCC_SM_CONTROL, MPCC_SM_FORCE_NEXT_TOP_POL, mask_sh),\
+-	SF(MPC_OUT0_MUX, MPC_OUT_MUX, mask_sh)
++	SF(MPC_OUT0_MUX, MPC_OUT_MUX, mask_sh),\
++	SF(MPCC0_MPCC_UPDATE_LOCK_SEL, MPCC_UPDATE_LOCK_SEL, mask_sh)
+ 
+ #define MPC_REG_FIELD_LIST(type) \
+ 	type MPCC_TOP_SEL;\
+@@ -101,7 +105,9 @@
+ 	type MPCC_SM_FIELD_ALT;\
+ 	type MPCC_SM_FORCE_NEXT_FRAME_POL;\
+ 	type MPCC_SM_FORCE_NEXT_TOP_POL;\
+-	type MPC_OUT_MUX;
++	type MPC_OUT_MUX;\
++	type MPCC_UPDATE_LOCK_SEL;\
++	type CUR_VUPDATE_LOCK_SET;
+ 
+ struct dcn_mpc_registers {
+ 	MPC_COMMON_REG_VARIABLE_LIST
+@@ -192,4 +198,6 @@ void mpc1_read_mpcc_state(
+ 		int mpcc_inst,
+ 		struct mpcc_state *s);
+ 
++void mpc1_cursor_lock(struct mpc *mpc, int opp_id, bool lock);
++
+ #endif
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+index 3b71898e859e..e3c4c06ac191 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn10/dcn10_resource.c
+@@ -181,6 +181,14 @@ enum dcn10_clk_src_array_id {
+ 	.reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+ 					mm ## block ## id ## _ ## reg_name
+ 
++#define VUPDATE_SRII(reg_name, block, id)\
++	.reg_name[id] = BASE(mm ## reg_name ## 0 ## _ ## block ## id ## _BASE_IDX) + \
++					mm ## reg_name ## 0 ## _ ## block ## id
++
++/* set field/register/bitfield name */
++#define SFRB(field_name, reg_name, bitfield, post_fix)\
++	.field_name = reg_name ## __ ## bitfield ## post_fix
++
+ /* NBIO */
+ #define NBIO_BASE_INNER(seg) \
+ 	NBIF_BASE__INST0_SEG ## seg
+@@ -419,11 +427,13 @@ static const struct dcn_mpc_registers mpc_regs = {
+ };
+ 
+ static const struct dcn_mpc_shift mpc_shift = {
+-	MPC_COMMON_MASK_SH_LIST_DCN1_0(__SHIFT)
++	MPC_COMMON_MASK_SH_LIST_DCN1_0(__SHIFT),\
++	SFRB(CUR_VUPDATE_LOCK_SET, CUR0_VUPDATE_LOCK_SET0, CUR0_VUPDATE_LOCK_SET, __SHIFT)
+ };
+ 
+ static const struct dcn_mpc_mask mpc_mask = {
+-	MPC_COMMON_MASK_SH_LIST_DCN1_0(_MASK),
++	MPC_COMMON_MASK_SH_LIST_DCN1_0(_MASK),\
++	SFRB(CUR_VUPDATE_LOCK_SET, CUR0_VUPDATE_LOCK_SET0, CUR0_VUPDATE_LOCK_SET, _MASK)
+ };
+ 
+ #define tg_regs(id)\
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+index ad422e00f9fe..611dac544bfe 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.c
+@@ -1088,40 +1088,18 @@ void dcn20_enable_plane(
+ //	}
+ }
+ 
+-
+-void dcn20_pipe_control_lock_global(
+-		struct dc *dc,
+-		struct pipe_ctx *pipe,
+-		bool lock)
+-{
+-	if (lock) {
+-		pipe->stream_res.tg->funcs->lock_doublebuffer_enable(
+-				pipe->stream_res.tg);
+-		pipe->stream_res.tg->funcs->lock(pipe->stream_res.tg);
+-	} else {
+-		pipe->stream_res.tg->funcs->unlock(pipe->stream_res.tg);
+-		pipe->stream_res.tg->funcs->wait_for_state(pipe->stream_res.tg,
+-				CRTC_STATE_VACTIVE);
+-		pipe->stream_res.tg->funcs->wait_for_state(pipe->stream_res.tg,
+-				CRTC_STATE_VBLANK);
+-		pipe->stream_res.tg->funcs->wait_for_state(pipe->stream_res.tg,
+-				CRTC_STATE_VACTIVE);
+-		pipe->stream_res.tg->funcs->lock_doublebuffer_disable(
+-				pipe->stream_res.tg);
+-	}
+-}
+-
+ void dcn20_pipe_control_lock(
+ 	struct dc *dc,
+ 	struct pipe_ctx *pipe,
+ 	bool lock)
+ {
+ 	bool flip_immediate = false;
++	bool dig_update_required = false;
+ 
+ 	/* use TG master update lock to lock everything on the TG
+ 	 * therefore only top pipe need to lock
+ 	 */
+-	if (pipe->top_pipe)
++	if (!pipe || pipe->top_pipe)
+ 		return;
+ 
+ 	if (pipe->plane_state != NULL)
+@@ -1154,6 +1132,19 @@ void dcn20_pipe_control_lock(
+ 		    (!flip_immediate && pipe->stream_res.gsl_group > 0))
+ 			dcn20_setup_gsl_group_as_lock(dc, pipe, flip_immediate);
+ 
++	if (pipe->stream && pipe->stream->update_flags.bits.dsc_changed)
++		dig_update_required = true;
++
++	/* Need double buffer lock mode in order to synchronize front end pipe
++	 * updates with dig updates.
++	 */
++	if (dig_update_required) {
++		if (lock) {
++			pipe->stream_res.tg->funcs->lock_doublebuffer_enable(
++					pipe->stream_res.tg);
++		}
++	}
++
+ 	if (pipe->plane_state != NULL && pipe->plane_state->triplebuffer_flips) {
+ 		if (lock)
+ 			pipe->stream_res.tg->funcs->triplebuffer_lock(pipe->stream_res.tg);
+@@ -1165,6 +1156,19 @@ void dcn20_pipe_control_lock(
+ 		else
+ 			pipe->stream_res.tg->funcs->unlock(pipe->stream_res.tg);
+ 	}
++
++	if (dig_update_required) {
++		if (!lock) {
++			pipe->stream_res.tg->funcs->wait_for_state(pipe->stream_res.tg,
++					CRTC_STATE_VACTIVE);
++			pipe->stream_res.tg->funcs->wait_for_state(pipe->stream_res.tg,
++					CRTC_STATE_VBLANK);
++			pipe->stream_res.tg->funcs->wait_for_state(pipe->stream_res.tg,
++					CRTC_STATE_VACTIVE);
++			pipe->stream_res.tg->funcs->lock_doublebuffer_disable(
++					pipe->stream_res.tg);
++		}
++	}
+ }
+ 
+ static void dcn20_detect_pipe_changes(struct pipe_ctx *old_pipe, struct pipe_ctx *new_pipe)
+@@ -1536,27 +1540,28 @@ static void dcn20_program_pipe(
+ 	}
+ }
+ 
+-static bool does_pipe_need_lock(struct pipe_ctx *pipe)
+-{
+-	if ((pipe->plane_state && pipe->plane_state->update_flags.raw)
+-			|| pipe->update_flags.raw)
+-		return true;
+-	if (pipe->bottom_pipe)
+-		return does_pipe_need_lock(pipe->bottom_pipe);
+-
+-	return false;
+-}
+-
+ void dcn20_program_front_end_for_ctx(
+ 		struct dc *dc,
+ 		struct dc_state *context)
+ {
+-	const unsigned int TIMEOUT_FOR_PIPE_ENABLE_MS = 100;
+ 	int i;
+ 	struct dce_hwseq *hws = dc->hwseq;
+-	bool pipe_locked[MAX_PIPES] = {false};
+ 	DC_LOGGER_INIT(dc->ctx->logger);
+ 
++	for (i = 0; i < dc->res_pool->pipe_count; i++) {
++		struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++
++		if (!pipe_ctx->top_pipe && !pipe_ctx->prev_odm_pipe && pipe_ctx->plane_state) {
++			ASSERT(!pipe_ctx->plane_state->triplebuffer_flips);
++			if (dc->hwss.program_triplebuffer != NULL &&
++				!dc->debug.disable_tri_buf) {
++				/* turn off triple buffer for full update */
++				dc->hwss.program_triplebuffer(
++					dc, pipe_ctx, pipe_ctx->plane_state->triplebuffer_flips);
++			}
++		}
++	}
++
+ 	/* Carry over GSL groups in case the context is changing. */
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+ 		if (context->res_ctx.pipe_ctx[i].stream == dc->current_state->res_ctx.pipe_ctx[i].stream)
+@@ -1567,17 +1572,6 @@ void dcn20_program_front_end_for_ctx(
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+ 		dcn20_detect_pipe_changes(&dc->current_state->res_ctx.pipe_ctx[i],
+ 				&context->res_ctx.pipe_ctx[i]);
+-	for (i = 0; i < dc->res_pool->pipe_count; i++)
+-		if (!context->res_ctx.pipe_ctx[i].top_pipe &&
+-				does_pipe_need_lock(&context->res_ctx.pipe_ctx[i])) {
+-			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
+-
+-			if (pipe_ctx->update_flags.bits.tg_changed || pipe_ctx->update_flags.bits.enable)
+-				dc->hwss.pipe_control_lock(dc, pipe_ctx, true);
+-			if (!pipe_ctx->update_flags.bits.enable)
+-				dc->hwss.pipe_control_lock(dc, &dc->current_state->res_ctx.pipe_ctx[i], true);
+-			pipe_locked[i] = true;
+-		}
+ 
+ 	/* OTG blank before disabling all front ends */
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+@@ -1615,17 +1609,16 @@ void dcn20_program_front_end_for_ctx(
+ 				hws->funcs.program_all_writeback_pipes_in_tree(dc, pipe->stream, context);
+ 		}
+ 	}
++}
+ 
+-	/* Unlock all locked pipes */
+-	for (i = 0; i < dc->res_pool->pipe_count; i++)
+-		if (pipe_locked[i]) {
+-			struct pipe_ctx *pipe_ctx = &context->res_ctx.pipe_ctx[i];
++void dcn20_post_unlock_program_front_end(
++		struct dc *dc,
++		struct dc_state *context)
++{
++	int i;
++	const unsigned int TIMEOUT_FOR_PIPE_ENABLE_MS = 100;
+ 
+-			if (pipe_ctx->update_flags.bits.tg_changed || pipe_ctx->update_flags.bits.enable)
+-				dc->hwss.pipe_control_lock(dc, pipe_ctx, false);
+-			if (!pipe_ctx->update_flags.bits.enable)
+-				dc->hwss.pipe_control_lock(dc, &dc->current_state->res_ctx.pipe_ctx[i], false);
+-		}
++	DC_LOGGER_INIT(dc->ctx->logger);
+ 
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+ 		if (context->res_ctx.pipe_ctx[i].update_flags.bits.disable)
+@@ -1655,7 +1648,6 @@ void dcn20_program_front_end_for_ctx(
+ 		dc->res_pool->hubbub->funcs->apply_DEDCN21_147_wa(dc->res_pool->hubbub);
+ }
+ 
+-
+ void dcn20_prepare_bandwidth(
+ 		struct dc *dc,
+ 		struct dc_state *context)
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
+index 02c9be5ebd47..63ce763f148e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_hwseq.h
+@@ -35,6 +35,9 @@ bool dcn20_set_shaper_3dlut(
+ void dcn20_program_front_end_for_ctx(
+ 		struct dc *dc,
+ 		struct dc_state *context);
++void dcn20_post_unlock_program_front_end(
++		struct dc *dc,
++		struct dc_state *context);
+ void dcn20_update_plane_addr(const struct dc *dc, struct pipe_ctx *pipe_ctx);
+ void dcn20_update_mpcc(struct dc *dc, struct pipe_ctx *pipe_ctx);
+ bool dcn20_set_input_transfer_func(struct dc *dc, struct pipe_ctx *pipe_ctx,
+@@ -58,10 +61,6 @@ void dcn20_pipe_control_lock(
+ 	struct dc *dc,
+ 	struct pipe_ctx *pipe,
+ 	bool lock);
+-void dcn20_pipe_control_lock_global(
+-		struct dc *dc,
+-		struct pipe_ctx *pipe,
+-		bool lock);
+ void dcn20_prepare_bandwidth(
+ 		struct dc *dc,
+ 		struct dc_state *context);
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
+index 5e640f17d3d4..71bfde2cf646 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_init.c
+@@ -33,6 +33,7 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
+ 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
+ 	.apply_ctx_for_surface = NULL,
+ 	.program_front_end_for_ctx = dcn20_program_front_end_for_ctx,
++	.post_unlock_program_front_end = dcn20_post_unlock_program_front_end,
+ 	.update_plane_addr = dcn20_update_plane_addr,
+ 	.update_dchub = dcn10_update_dchub,
+ 	.update_pending_status = dcn10_update_pending_status,
+@@ -50,7 +51,8 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
+ 	.disable_audio_stream = dce110_disable_audio_stream,
+ 	.disable_plane = dcn20_disable_plane,
+ 	.pipe_control_lock = dcn20_pipe_control_lock,
+-	.pipe_control_lock_global = dcn20_pipe_control_lock_global,
++	.interdependent_update_lock = dcn10_lock_all_pipes,
++	.cursor_lock = dcn10_cursor_lock,
+ 	.prepare_bandwidth = dcn20_prepare_bandwidth,
+ 	.optimize_bandwidth = dcn20_optimize_bandwidth,
+ 	.update_bandwidth = dcn20_update_bandwidth,
+@@ -81,6 +83,7 @@ static const struct hw_sequencer_funcs dcn20_funcs = {
+ 	.init_vm_ctx = dcn20_init_vm_ctx,
+ 	.set_flip_control_gsl = dcn20_set_flip_control_gsl,
+ 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
++	.calc_vupdate_position = dcn10_calc_vupdate_position,
+ };
+ 
+ static const struct hwseq_private_funcs dcn20_private_funcs = {
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
+index de9c857ab3e9..570dfd9a243f 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.c
+@@ -545,6 +545,7 @@ const struct mpc_funcs dcn20_mpc_funcs = {
+ 	.mpc_init = mpc1_mpc_init,
+ 	.mpc_init_single_inst = mpc1_mpc_init_single_inst,
+ 	.update_blending = mpc2_update_blending,
++	.cursor_lock = mpc1_cursor_lock,
+ 	.get_mpcc_for_dpp = mpc2_get_mpcc_for_dpp,
+ 	.wait_for_idle = mpc2_assert_idle_mpcc,
+ 	.assert_mpcc_idle_before_connect = mpc2_assert_mpcc_idle_before_connect,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.h b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.h
+index c78fd5123497..496658f420db 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.h
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_mpc.h
+@@ -179,7 +179,8 @@
+ 	SF(MPC_OUT0_DENORM_CLAMP_G_Y, MPC_OUT_DENORM_CLAMP_MAX_G_Y, mask_sh),\
+ 	SF(MPC_OUT0_DENORM_CLAMP_G_Y, MPC_OUT_DENORM_CLAMP_MIN_G_Y, mask_sh),\
+ 	SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MAX_B_CB, mask_sh),\
+-	SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MIN_B_CB, mask_sh)
++	SF(MPC_OUT0_DENORM_CLAMP_B_CB, MPC_OUT_DENORM_CLAMP_MIN_B_CB, mask_sh),\
++	SF(CUR_VUPDATE_LOCK_SET0, CUR_VUPDATE_LOCK_SET, mask_sh)
+ 
+ /*
+  *	DCN2 MPC_OCSC debug status register:
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+index 1b0bca9587d0..1ba47f3a6857 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn20/dcn20_resource.c
+@@ -506,6 +506,10 @@ enum dcn20_clk_src_array_id {
+ 	.block ## _ ## reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+ 					mm ## block ## id ## _ ## reg_name
+ 
++#define VUPDATE_SRII(reg_name, block, id)\
++	.reg_name[id] = BASE(mm ## reg_name ## _ ## block ## id ## _BASE_IDX) + \
++					mm ## reg_name ## _ ## block ## id
++
+ /* NBIO */
+ #define NBIO_BASE_INNER(seg) \
+ 	NBIO_BASE__INST0_SEG ## seg
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
+index fddbd59bf4f9..7f53bf724fce 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_init.c
+@@ -34,6 +34,7 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
+ 	.apply_ctx_to_hw = dce110_apply_ctx_to_hw,
+ 	.apply_ctx_for_surface = NULL,
+ 	.program_front_end_for_ctx = dcn20_program_front_end_for_ctx,
++	.post_unlock_program_front_end = dcn20_post_unlock_program_front_end,
+ 	.update_plane_addr = dcn20_update_plane_addr,
+ 	.update_dchub = dcn10_update_dchub,
+ 	.update_pending_status = dcn10_update_pending_status,
+@@ -51,7 +52,8 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
+ 	.disable_audio_stream = dce110_disable_audio_stream,
+ 	.disable_plane = dcn20_disable_plane,
+ 	.pipe_control_lock = dcn20_pipe_control_lock,
+-	.pipe_control_lock_global = dcn20_pipe_control_lock_global,
++	.interdependent_update_lock = dcn10_lock_all_pipes,
++	.cursor_lock = dcn10_cursor_lock,
+ 	.prepare_bandwidth = dcn20_prepare_bandwidth,
+ 	.optimize_bandwidth = dcn20_optimize_bandwidth,
+ 	.update_bandwidth = dcn20_update_bandwidth,
+@@ -84,6 +86,7 @@ static const struct hw_sequencer_funcs dcn21_funcs = {
+ 	.optimize_pwr_state = dcn21_optimize_pwr_state,
+ 	.exit_optimized_pwr_state = dcn21_exit_optimized_pwr_state,
+ 	.get_vupdate_offset_from_vsync = dcn10_get_vupdate_offset_from_vsync,
++	.calc_vupdate_position = dcn10_calc_vupdate_position,
+ 	.set_cursor_position = dcn10_set_cursor_position,
+ 	.set_cursor_attribute = dcn10_set_cursor_attribute,
+ 	.set_cursor_sdr_white_level = dcn10_set_cursor_sdr_white_level,
+diff --git a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+index 122d3e734c59..5286cc7d1261 100644
+--- a/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
++++ b/drivers/gpu/drm/amd/display/dc/dcn21/dcn21_resource.c
+@@ -306,6 +306,10 @@ struct _vcs_dpi_soc_bounding_box_st dcn2_1_soc = {
+ 	.block ## _ ## reg_name[id] = BASE(mm ## block ## id ## _ ## reg_name ## _BASE_IDX) + \
+ 					mm ## block ## id ## _ ## reg_name
+ 
++#define VUPDATE_SRII(reg_name, block, id)\
++	.reg_name[id] = BASE(mm ## reg_name ## _ ## block ## id ## _BASE_IDX) + \
++					mm ## reg_name ## _ ## block ## id
++
+ /* NBIO */
+ #define NBIO_BASE_INNER(seg) \
+ 	NBIF0_BASE__INST0_SEG ## seg
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
+index 094afc4c8173..50ee8aa7ec3b 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw/mpc.h
+@@ -210,6 +210,22 @@ struct mpc_funcs {
+ 		struct mpcc_blnd_cfg *blnd_cfg,
+ 		int mpcc_id);
+ 
++	/*
++	 * Lock cursor updates for the specified OPP.
++	 * OPP defines the set of MPCC that are locked together for cursor.
++	 *
++	 * Parameters:
++	 * [in] 	mpc		- MPC context.
++	 * [in]     opp_id	- The OPP to lock cursor updates on
++	 * [in]		lock	- lock/unlock the OPP
++	 *
++	 * Return:  void
++	 */
++	void (*cursor_lock)(
++			struct mpc *mpc,
++			int opp_id,
++			bool lock);
++
+ 	struct mpcc* (*get_mpcc_for_dpp)(
+ 			struct mpc_tree *tree,
+ 			int dpp_id);
+diff --git a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+index 209118f9f193..08307f3796e3 100644
+--- a/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
++++ b/drivers/gpu/drm/amd/display/dc/inc/hw_sequencer.h
+@@ -66,6 +66,8 @@ struct hw_sequencer_funcs {
+ 			int num_planes, struct dc_state *context);
+ 	void (*program_front_end_for_ctx)(struct dc *dc,
+ 			struct dc_state *context);
++	void (*post_unlock_program_front_end)(struct dc *dc,
++			struct dc_state *context);
+ 	void (*update_plane_addr)(const struct dc *dc,
+ 			struct pipe_ctx *pipe_ctx);
+ 	void (*update_dchub)(struct dce_hwseq *hws,
+@@ -78,17 +80,23 @@ struct hw_sequencer_funcs {
+ 	void (*update_pending_status)(struct pipe_ctx *pipe_ctx);
+ 
+ 	/* Pipe Lock Related */
+-	void (*pipe_control_lock_global)(struct dc *dc,
+-			struct pipe_ctx *pipe, bool lock);
+ 	void (*pipe_control_lock)(struct dc *dc,
+ 			struct pipe_ctx *pipe, bool lock);
++	void (*interdependent_update_lock)(struct dc *dc,
++			struct dc_state *context, bool lock);
+ 	void (*set_flip_control_gsl)(struct pipe_ctx *pipe_ctx,
+ 			bool flip_immediate);
++	void (*cursor_lock)(struct dc *dc, struct pipe_ctx *pipe, bool lock);
+ 
+ 	/* Timing Related */
+ 	void (*get_position)(struct pipe_ctx **pipe_ctx, int num_pipes,
+ 			struct crtc_position *position);
+ 	int (*get_vupdate_offset_from_vsync)(struct pipe_ctx *pipe_ctx);
++	void (*calc_vupdate_position)(
++			struct dc *dc,
++			struct pipe_ctx *pipe_ctx,
++			uint32_t *start_line,
++			uint32_t *end_line);
+ 	void (*enable_per_frame_crtc_position_reset)(struct dc *dc,
+ 			int group_size, struct pipe_ctx *grouped_pipes[]);
+ 	void (*enable_timing_synchronization)(struct dc *dc,
+diff --git a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+index e4e5a53b2b4e..8e2acb4df860 100644
+--- a/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
++++ b/drivers/gpu/drm/amd/powerplay/amd_powerplay.c
+@@ -319,12 +319,12 @@ static void pp_dpm_en_umd_pstate(struct pp_hwmgr  *hwmgr,
+ 		if (*level & profile_mode_mask) {
+ 			hwmgr->saved_dpm_level = hwmgr->dpm_level;
+ 			hwmgr->en_umd_pstate = true;
+-			amdgpu_device_ip_set_clockgating_state(hwmgr->adev,
+-						AMD_IP_BLOCK_TYPE_GFX,
+-						AMD_CG_STATE_UNGATE);
+ 			amdgpu_device_ip_set_powergating_state(hwmgr->adev,
+ 					AMD_IP_BLOCK_TYPE_GFX,
+ 					AMD_PG_STATE_UNGATE);
++			amdgpu_device_ip_set_clockgating_state(hwmgr->adev,
++						AMD_IP_BLOCK_TYPE_GFX,
++						AMD_CG_STATE_UNGATE);
+ 		}
+ 	} else {
+ 		/* exit umd pstate, restore level, enable gfx cg*/
+diff --git a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+index 96e81c7bc266..e2565967db07 100644
+--- a/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
++++ b/drivers/gpu/drm/amd/powerplay/amdgpu_smu.c
+@@ -1675,12 +1675,12 @@ static int smu_enable_umd_pstate(void *handle,
+ 		if (*level & profile_mode_mask) {
+ 			smu_dpm_ctx->saved_dpm_level = smu_dpm_ctx->dpm_level;
+ 			smu_dpm_ctx->enable_umd_pstate = true;
+-			amdgpu_device_ip_set_clockgating_state(smu->adev,
+-							       AMD_IP_BLOCK_TYPE_GFX,
+-							       AMD_CG_STATE_UNGATE);
+ 			amdgpu_device_ip_set_powergating_state(smu->adev,
+ 							       AMD_IP_BLOCK_TYPE_GFX,
+ 							       AMD_PG_STATE_UNGATE);
++			amdgpu_device_ip_set_clockgating_state(smu->adev,
++							       AMD_IP_BLOCK_TYPE_GFX,
++							       AMD_CG_STATE_UNGATE);
+ 		}
+ 	} else {
+ 		/* exit umd pstate, restore level, enable gfx cg*/
+diff --git a/drivers/gpu/drm/ingenic/ingenic-drm.c b/drivers/gpu/drm/ingenic/ingenic-drm.c
+index bcba2f024842..e9900e078d51 100644
+--- a/drivers/gpu/drm/ingenic/ingenic-drm.c
++++ b/drivers/gpu/drm/ingenic/ingenic-drm.c
+@@ -328,8 +328,8 @@ static int ingenic_drm_crtc_atomic_check(struct drm_crtc *crtc,
+ 	if (!drm_atomic_crtc_needs_modeset(state))
+ 		return 0;
+ 
+-	if (state->mode.hdisplay > priv->soc_info->max_height ||
+-	    state->mode.vdisplay > priv->soc_info->max_width)
++	if (state->mode.hdisplay > priv->soc_info->max_width ||
++	    state->mode.vdisplay > priv->soc_info->max_height)
+ 		return -EINVAL;
+ 
+ 	rate = clk_round_rate(priv->pix_clk,
+@@ -474,7 +474,7 @@ static int ingenic_drm_encoder_atomic_check(struct drm_encoder *encoder,
+ 
+ static irqreturn_t ingenic_drm_irq_handler(int irq, void *arg)
+ {
+-	struct ingenic_drm *priv = arg;
++	struct ingenic_drm *priv = drm_device_get_priv(arg);
+ 	unsigned int state;
+ 
+ 	regmap_read(priv->map, JZ_REG_LCD_STATE, &state);
+diff --git a/drivers/gpu/drm/meson/meson_drv.c b/drivers/gpu/drm/meson/meson_drv.c
+index b5f5eb7b4bb9..8c2e1b47e81a 100644
+--- a/drivers/gpu/drm/meson/meson_drv.c
++++ b/drivers/gpu/drm/meson/meson_drv.c
+@@ -412,9 +412,7 @@ static int __maybe_unused meson_drv_pm_resume(struct device *dev)
+ 	if (priv->afbcd.ops)
+ 		priv->afbcd.ops->init(priv);
+ 
+-	drm_mode_config_helper_resume(priv->drm);
+-
+-	return 0;
++	return drm_mode_config_helper_resume(priv->drm);
+ }
+ 
+ static int compare_of(struct device *dev, void *data)
+diff --git a/drivers/hwmon/nct7904.c b/drivers/hwmon/nct7904.c
+index 281c81edabc6..dfb122b5e1b7 100644
+--- a/drivers/hwmon/nct7904.c
++++ b/drivers/hwmon/nct7904.c
+@@ -356,6 +356,7 @@ static int nct7904_read_temp(struct device *dev, u32 attr, int channel,
+ 	struct nct7904_data *data = dev_get_drvdata(dev);
+ 	int ret, temp;
+ 	unsigned int reg1, reg2, reg3;
++	s8 temps;
+ 
+ 	switch (attr) {
+ 	case hwmon_temp_input:
+@@ -461,7 +462,8 @@ static int nct7904_read_temp(struct device *dev, u32 attr, int channel,
+ 
+ 	if (ret < 0)
+ 		return ret;
+-	*val = ret * 1000;
++	temps = ret;
++	*val = temps * 1000;
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/core/rdma_core.c b/drivers/infiniband/core/rdma_core.c
+index bf8e149d3191..e0a5e897e4b1 100644
+--- a/drivers/infiniband/core/rdma_core.c
++++ b/drivers/infiniband/core/rdma_core.c
+@@ -153,9 +153,9 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj,
+ 	uobj->context = NULL;
+ 
+ 	/*
+-	 * For DESTROY the usecnt is held write locked, the caller is expected
+-	 * to put it unlock and put the object when done with it. Only DESTROY
+-	 * can remove the IDR handle.
++	 * For DESTROY the usecnt is not changed, the caller is expected to
++	 * manage it via uobj_put_destroy(). Only DESTROY can remove the IDR
++	 * handle.
+ 	 */
+ 	if (reason != RDMA_REMOVE_DESTROY)
+ 		atomic_set(&uobj->usecnt, 0);
+@@ -187,7 +187,7 @@ static int uverbs_destroy_uobject(struct ib_uobject *uobj,
+ /*
+  * This calls uverbs_destroy_uobject() using the RDMA_REMOVE_DESTROY
+  * sequence. It should only be used from command callbacks. On success the
+- * caller must pair this with rdma_lookup_put_uobject(LOOKUP_WRITE). This
++ * caller must pair this with uobj_put_destroy(). This
+  * version requires the caller to have already obtained an
+  * LOOKUP_DESTROY uobject kref.
+  */
+@@ -198,6 +198,13 @@ int uobj_destroy(struct ib_uobject *uobj, struct uverbs_attr_bundle *attrs)
+ 
+ 	down_read(&ufile->hw_destroy_rwsem);
+ 
++	/*
++	 * Once the uobject is destroyed by RDMA_REMOVE_DESTROY then it is left
++	 * write locked as the callers put it back with UVERBS_LOOKUP_DESTROY.
++	 * This is because any other concurrent thread can still see the object
++	 * in the xarray due to RCU. Leaving it locked ensures nothing else will
++	 * touch it.
++	 */
+ 	ret = uverbs_try_lock_object(uobj, UVERBS_LOOKUP_WRITE);
+ 	if (ret)
+ 		goto out_unlock;
+@@ -216,7 +223,7 @@ out_unlock:
+ /*
+  * uobj_get_destroy destroys the HW object and returns a handle to the uobj
+  * with a NULL object pointer. The caller must pair this with
+- * uverbs_put_destroy.
++ * uobj_put_destroy().
+  */
+ struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj,
+ 				      u32 id, struct uverbs_attr_bundle *attrs)
+@@ -250,8 +257,7 @@ int __uobj_perform_destroy(const struct uverbs_api_object *obj, u32 id,
+ 	uobj = __uobj_get_destroy(obj, id, attrs);
+ 	if (IS_ERR(uobj))
+ 		return PTR_ERR(uobj);
+-
+-	rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE);
++	uobj_put_destroy(uobj);
+ 	return 0;
+ }
+ 
+diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
+index bb78d3280acc..fa7a5ff498c7 100644
+--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
++++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
+@@ -1987,7 +1987,6 @@ static int i40iw_addr_resolve_neigh(struct i40iw_device *iwdev,
+ 	struct rtable *rt;
+ 	struct neighbour *neigh;
+ 	int rc = arpindex;
+-	struct net_device *netdev = iwdev->netdev;
+ 	__be32 dst_ipaddr = htonl(dst_ip);
+ 	__be32 src_ipaddr = htonl(src_ip);
+ 
+@@ -1997,9 +1996,6 @@ static int i40iw_addr_resolve_neigh(struct i40iw_device *iwdev,
+ 		return rc;
+ 	}
+ 
+-	if (netif_is_bond_slave(netdev))
+-		netdev = netdev_master_upper_dev_get(netdev);
+-
+ 	neigh = dst_neigh_lookup(&rt->dst, &dst_ipaddr);
+ 
+ 	rcu_read_lock();
+@@ -2065,7 +2061,6 @@ static int i40iw_addr_resolve_neigh_ipv6(struct i40iw_device *iwdev,
+ {
+ 	struct neighbour *neigh;
+ 	int rc = arpindex;
+-	struct net_device *netdev = iwdev->netdev;
+ 	struct dst_entry *dst;
+ 	struct sockaddr_in6 dst_addr;
+ 	struct sockaddr_in6 src_addr;
+@@ -2086,9 +2081,6 @@ static int i40iw_addr_resolve_neigh_ipv6(struct i40iw_device *iwdev,
+ 		return rc;
+ 	}
+ 
+-	if (netif_is_bond_slave(netdev))
+-		netdev = netdev_master_upper_dev_get(netdev);
+-
+ 	neigh = dst_neigh_lookup(dst, dst_addr.sin6_addr.in6_u.u6_addr32);
+ 
+ 	rcu_read_lock();
+diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
+index 6fa0a83c19de..9a1747a97fb6 100644
+--- a/drivers/infiniband/hw/mlx5/mr.c
++++ b/drivers/infiniband/hw/mlx5/mr.c
+@@ -1319,6 +1319,7 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
+ 
+ 	if (is_odp_mr(mr)) {
+ 		to_ib_umem_odp(mr->umem)->private = mr;
++		init_waitqueue_head(&mr->q_deferred_work);
+ 		atomic_set(&mr->num_deferred_work, 0);
+ 		err = xa_err(xa_store(&dev->odp_mkeys,
+ 				      mlx5_base_mkey(mr->mmkey.key), &mr->mmkey,
+diff --git a/drivers/infiniband/hw/qib/qib_sysfs.c b/drivers/infiniband/hw/qib/qib_sysfs.c
+index 568b21eb6ea1..021df0654ba7 100644
+--- a/drivers/infiniband/hw/qib/qib_sysfs.c
++++ b/drivers/infiniband/hw/qib/qib_sysfs.c
+@@ -760,7 +760,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		qib_dev_err(dd,
+ 			"Skipping linkcontrol sysfs info, (err %d) port %u\n",
+ 			ret, port_num);
+-		goto bail;
++		goto bail_link;
+ 	}
+ 	kobject_uevent(&ppd->pport_kobj, KOBJ_ADD);
+ 
+@@ -770,7 +770,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		qib_dev_err(dd,
+ 			"Skipping sl2vl sysfs info, (err %d) port %u\n",
+ 			ret, port_num);
+-		goto bail_link;
++		goto bail_sl;
+ 	}
+ 	kobject_uevent(&ppd->sl2vl_kobj, KOBJ_ADD);
+ 
+@@ -780,7 +780,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		qib_dev_err(dd,
+ 			"Skipping diag_counters sysfs info, (err %d) port %u\n",
+ 			ret, port_num);
+-		goto bail_sl;
++		goto bail_diagc;
+ 	}
+ 	kobject_uevent(&ppd->diagc_kobj, KOBJ_ADD);
+ 
+@@ -793,7 +793,7 @@ int qib_create_port_files(struct ib_device *ibdev, u8 port_num,
+ 		qib_dev_err(dd,
+ 		 "Skipping Congestion Control sysfs info, (err %d) port %u\n",
+ 		 ret, port_num);
+-		goto bail_diagc;
++		goto bail_cc;
+ 	}
+ 
+ 	kobject_uevent(&ppd->pport_cc_kobj, KOBJ_ADD);
+@@ -854,6 +854,7 @@ void qib_verbs_unregister_sysfs(struct qib_devdata *dd)
+ 				&cc_table_bin_attr);
+ 			kobject_put(&ppd->pport_cc_kobj);
+ 		}
++		kobject_put(&ppd->diagc_kobj);
+ 		kobject_put(&ppd->sl2vl_kobj);
+ 		kobject_put(&ppd->pport_kobj);
+ 	}
+diff --git a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
+index e580ae9cc55a..780fd2dfc07e 100644
+--- a/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
++++ b/drivers/infiniband/hw/vmw_pvrdma/pvrdma_main.c
+@@ -829,7 +829,7 @@ static int pvrdma_pci_probe(struct pci_dev *pdev,
+ 	    !(pci_resource_flags(pdev, 1) & IORESOURCE_MEM)) {
+ 		dev_err(&pdev->dev, "PCI BAR region not MMIO\n");
+ 		ret = -ENOMEM;
+-		goto err_free_device;
++		goto err_disable_pdev;
+ 	}
+ 
+ 	ret = pci_request_regions(pdev, DRV_NAME);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
+index 2aa3457a30ce..0e5f27caf2b2 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib.h
++++ b/drivers/infiniband/ulp/ipoib/ipoib.h
+@@ -377,8 +377,12 @@ struct ipoib_dev_priv {
+ 	struct ipoib_rx_buf *rx_ring;
+ 
+ 	struct ipoib_tx_buf *tx_ring;
++	/* cyclic ring variables for managing tx_ring, for UD only */
+ 	unsigned int	     tx_head;
+ 	unsigned int	     tx_tail;
++	/* cyclic ring variables for counting overall outstanding send WRs */
++	unsigned int	     global_tx_head;
++	unsigned int	     global_tx_tail;
+ 	struct ib_sge	     tx_sge[MAX_SKB_FRAGS + 1];
+ 	struct ib_ud_wr      tx_wr;
+ 	struct ib_wc	     send_wc[MAX_SEND_CQE];
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+index c59e00a0881f..9bf0fa30df28 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+@@ -756,7 +756,8 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
+ 		return;
+ 	}
+ 
+-	if ((priv->tx_head - priv->tx_tail) == ipoib_sendq_size - 1) {
++	if ((priv->global_tx_head - priv->global_tx_tail) ==
++	    ipoib_sendq_size - 1) {
+ 		ipoib_dbg(priv, "TX ring 0x%x full, stopping kernel net queue\n",
+ 			  tx->qp->qp_num);
+ 		netif_stop_queue(dev);
+@@ -786,7 +787,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
+ 	} else {
+ 		netif_trans_update(dev);
+ 		++tx->tx_head;
+-		++priv->tx_head;
++		++priv->global_tx_head;
+ 	}
+ }
+ 
+@@ -820,10 +821,11 @@ void ipoib_cm_handle_tx_wc(struct net_device *dev, struct ib_wc *wc)
+ 	netif_tx_lock(dev);
+ 
+ 	++tx->tx_tail;
+-	++priv->tx_tail;
++	++priv->global_tx_tail;
+ 
+ 	if (unlikely(netif_queue_stopped(dev) &&
+-		     (priv->tx_head - priv->tx_tail) <= ipoib_sendq_size >> 1 &&
++		     ((priv->global_tx_head - priv->global_tx_tail) <=
++		      ipoib_sendq_size >> 1) &&
+ 		     test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags)))
+ 		netif_wake_queue(dev);
+ 
+@@ -1232,8 +1234,9 @@ timeout:
+ 		dev_kfree_skb_any(tx_req->skb);
+ 		netif_tx_lock_bh(p->dev);
+ 		++p->tx_tail;
+-		++priv->tx_tail;
+-		if (unlikely(priv->tx_head - priv->tx_tail == ipoib_sendq_size >> 1) &&
++		++priv->global_tx_tail;
++		if (unlikely((priv->global_tx_head - priv->global_tx_tail) <=
++			     ipoib_sendq_size >> 1) &&
+ 		    netif_queue_stopped(p->dev) &&
+ 		    test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags))
+ 			netif_wake_queue(p->dev);
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+index c332b4761816..da3c5315bbb5 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+@@ -407,9 +407,11 @@ static void ipoib_ib_handle_tx_wc(struct net_device *dev, struct ib_wc *wc)
+ 	dev_kfree_skb_any(tx_req->skb);
+ 
+ 	++priv->tx_tail;
++	++priv->global_tx_tail;
+ 
+ 	if (unlikely(netif_queue_stopped(dev) &&
+-		     ((priv->tx_head - priv->tx_tail) <= ipoib_sendq_size >> 1) &&
++		     ((priv->global_tx_head - priv->global_tx_tail) <=
++		      ipoib_sendq_size >> 1) &&
+ 		     test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags)))
+ 		netif_wake_queue(dev);
+ 
+@@ -634,7 +636,8 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ 	else
+ 		priv->tx_wr.wr.send_flags &= ~IB_SEND_IP_CSUM;
+ 	/* increase the tx_head after send success, but use it for queue state */
+-	if (priv->tx_head - priv->tx_tail == ipoib_sendq_size - 1) {
++	if ((priv->global_tx_head - priv->global_tx_tail) ==
++	    ipoib_sendq_size - 1) {
+ 		ipoib_dbg(priv, "TX ring full, stopping kernel net queue\n");
+ 		netif_stop_queue(dev);
+ 	}
+@@ -662,6 +665,7 @@ int ipoib_send(struct net_device *dev, struct sk_buff *skb,
+ 
+ 		rc = priv->tx_head;
+ 		++priv->tx_head;
++		++priv->global_tx_head;
+ 	}
+ 	return rc;
+ }
+@@ -807,6 +811,7 @@ int ipoib_ib_dev_stop_default(struct net_device *dev)
+ 				ipoib_dma_unmap_tx(priv, tx_req);
+ 				dev_kfree_skb_any(tx_req->skb);
+ 				++priv->tx_tail;
++				++priv->global_tx_tail;
+ 			}
+ 
+ 			for (i = 0; i < ipoib_recvq_size; ++i) {
+diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+index 4a0d3a9e72e1..70d6d476ba90 100644
+--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
++++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
+@@ -1188,9 +1188,11 @@ static void ipoib_timeout(struct net_device *dev, unsigned int txqueue)
+ 
+ 	ipoib_warn(priv, "transmit timeout: latency %d msecs\n",
+ 		   jiffies_to_msecs(jiffies - dev_trans_start(dev)));
+-	ipoib_warn(priv, "queue stopped %d, tx_head %u, tx_tail %u\n",
+-		   netif_queue_stopped(dev),
+-		   priv->tx_head, priv->tx_tail);
++	ipoib_warn(priv,
++		   "queue stopped %d, tx_head %u, tx_tail %u, global_tx_head %u, global_tx_tail %u\n",
++		   netif_queue_stopped(dev), priv->tx_head, priv->tx_tail,
++		   priv->global_tx_head, priv->global_tx_tail);
++
+ 	/* XXX reset QP, etc. */
+ }
+ 
+@@ -1705,7 +1707,7 @@ static int ipoib_dev_init_default(struct net_device *dev)
+ 		goto out_rx_ring_cleanup;
+ 	}
+ 
+-	/* priv->tx_head, tx_tail & tx_outstanding are already 0 */
++	/* priv->tx_head, tx_tail and global_tx_tail/head are already 0 */
+ 
+ 	if (ipoib_transport_dev_init(dev, priv->ca)) {
+ 		pr_warn("%s: ipoib_transport_dev_init failed\n",
+diff --git a/drivers/input/evdev.c b/drivers/input/evdev.c
+index cb6e3a5f509c..0d57e51b8ba1 100644
+--- a/drivers/input/evdev.c
++++ b/drivers/input/evdev.c
+@@ -326,20 +326,6 @@ static int evdev_fasync(int fd, struct file *file, int on)
+ 	return fasync_helper(fd, file, on, &client->fasync);
+ }
+ 
+-static int evdev_flush(struct file *file, fl_owner_t id)
+-{
+-	struct evdev_client *client = file->private_data;
+-	struct evdev *evdev = client->evdev;
+-
+-	mutex_lock(&evdev->mutex);
+-
+-	if (evdev->exist && !client->revoked)
+-		input_flush_device(&evdev->handle, file);
+-
+-	mutex_unlock(&evdev->mutex);
+-	return 0;
+-}
+-
+ static void evdev_free(struct device *dev)
+ {
+ 	struct evdev *evdev = container_of(dev, struct evdev, dev);
+@@ -453,6 +439,10 @@ static int evdev_release(struct inode *inode, struct file *file)
+ 	unsigned int i;
+ 
+ 	mutex_lock(&evdev->mutex);
++
++	if (evdev->exist && !client->revoked)
++		input_flush_device(&evdev->handle, file);
++
+ 	evdev_ungrab(evdev, client);
+ 	mutex_unlock(&evdev->mutex);
+ 
+@@ -1310,7 +1300,6 @@ static const struct file_operations evdev_fops = {
+ 	.compat_ioctl	= evdev_ioctl_compat,
+ #endif
+ 	.fasync		= evdev_fasync,
+-	.flush		= evdev_flush,
+ 	.llseek		= no_llseek,
+ };
+ 
+diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
+index 6b40a1c68f9f..c77cdb3b62b5 100644
+--- a/drivers/input/joystick/xpad.c
++++ b/drivers/input/joystick/xpad.c
+@@ -458,6 +458,16 @@ static const u8 xboxone_fw2015_init[] = {
+ 	0x05, 0x20, 0x00, 0x01, 0x00
+ };
+ 
++/*
++ * This packet is required for Xbox One S (0x045e:0x02ea)
++ * and Xbox One Elite Series 2 (0x045e:0x0b00) pads to
++ * initialize the controller that was previously used in
++ * Bluetooth mode.
++ */
++static const u8 xboxone_s_init[] = {
++	0x05, 0x20, 0x00, 0x0f, 0x06
++};
++
+ /*
+  * This packet is required for the Titanfall 2 Xbox One pads
+  * (0x0e6f:0x0165) to finish initialization and for Hori pads
+@@ -516,6 +526,8 @@ static const struct xboxone_init_packet xboxone_init_packets[] = {
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x0165, xboxone_hori_init),
+ 	XBOXONE_INIT_PKT(0x0f0d, 0x0067, xboxone_hori_init),
+ 	XBOXONE_INIT_PKT(0x0000, 0x0000, xboxone_fw2015_init),
++	XBOXONE_INIT_PKT(0x045e, 0x02ea, xboxone_s_init),
++	XBOXONE_INIT_PKT(0x045e, 0x0b00, xboxone_s_init),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init1),
+ 	XBOXONE_INIT_PKT(0x0e6f, 0x0000, xboxone_pdp_init2),
+ 	XBOXONE_INIT_PKT(0x24c6, 0x541a, xboxone_rumblebegin_init),
+diff --git a/drivers/input/keyboard/dlink-dir685-touchkeys.c b/drivers/input/keyboard/dlink-dir685-touchkeys.c
+index b0ead7199c40..a69dcc3bd30c 100644
+--- a/drivers/input/keyboard/dlink-dir685-touchkeys.c
++++ b/drivers/input/keyboard/dlink-dir685-touchkeys.c
+@@ -143,7 +143,7 @@ MODULE_DEVICE_TABLE(of, dir685_tk_of_match);
+ 
+ static struct i2c_driver dir685_tk_i2c_driver = {
+ 	.driver = {
+-		.name	= "dlin-dir685-touchkeys",
++		.name	= "dlink-dir685-touchkeys",
+ 		.of_match_table = of_match_ptr(dir685_tk_of_match),
+ 	},
+ 	.probe		= dir685_tk_probe,
+diff --git a/drivers/input/rmi4/rmi_driver.c b/drivers/input/rmi4/rmi_driver.c
+index 190b9974526b..258d5fe3d395 100644
+--- a/drivers/input/rmi4/rmi_driver.c
++++ b/drivers/input/rmi4/rmi_driver.c
+@@ -205,7 +205,7 @@ static irqreturn_t rmi_irq_fn(int irq, void *dev_id)
+ 
+ 	if (count) {
+ 		kfree(attn_data.data);
+-		attn_data.data = NULL;
++		drvdata->attn_data.data = NULL;
+ 	}
+ 
+ 	if (!kfifo_is_empty(&drvdata->attn_fifo))
+@@ -1210,7 +1210,8 @@ static int rmi_driver_probe(struct device *dev)
+ 	if (data->input) {
+ 		rmi_driver_set_input_name(rmi_dev, data->input);
+ 		if (!rmi_dev->xport->input) {
+-			if (input_register_device(data->input)) {
++			retval = input_register_device(data->input);
++			if (retval) {
+ 				dev_err(dev, "%s: Failed to register input device.\n",
+ 					__func__);
+ 				goto err_destroy_functions;
+diff --git a/drivers/input/serio/i8042-x86ia64io.h b/drivers/input/serio/i8042-x86ia64io.h
+index 08e919dbeb5d..7e048b557462 100644
+--- a/drivers/input/serio/i8042-x86ia64io.h
++++ b/drivers/input/serio/i8042-x86ia64io.h
+@@ -662,6 +662,13 @@ static const struct dmi_system_id __initconst i8042_dmi_reset_table[] = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "P65xRP"),
+ 		},
+ 	},
++	{
++		/* Lenovo ThinkPad Twist S230u */
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
++			DMI_MATCH(DMI_PRODUCT_NAME, "33474HU"),
++		},
++	},
+ 	{ }
+ };
+ 
+diff --git a/drivers/input/touchscreen/usbtouchscreen.c b/drivers/input/touchscreen/usbtouchscreen.c
+index 16d70201de4a..397cb1d3f481 100644
+--- a/drivers/input/touchscreen/usbtouchscreen.c
++++ b/drivers/input/touchscreen/usbtouchscreen.c
+@@ -182,6 +182,7 @@ static const struct usb_device_id usbtouch_devices[] = {
+ #endif
+ 
+ #ifdef CONFIG_TOUCHSCREEN_USB_IRTOUCH
++	{USB_DEVICE(0x255e, 0x0001), .driver_info = DEVTYPE_IRTOUCH},
+ 	{USB_DEVICE(0x595a, 0x0001), .driver_info = DEVTYPE_IRTOUCH},
+ 	{USB_DEVICE(0x6615, 0x0001), .driver_info = DEVTYPE_IRTOUCH},
+ 	{USB_DEVICE(0x6615, 0x0012), .driver_info = DEVTYPE_IRTOUCH_HIRES},
+diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
+index 22b28076d48e..b09de25df02e 100644
+--- a/drivers/iommu/iommu.c
++++ b/drivers/iommu/iommu.c
+@@ -509,7 +509,7 @@ struct iommu_group *iommu_group_alloc(void)
+ 				   NULL, "%d", group->id);
+ 	if (ret) {
+ 		ida_simple_remove(&iommu_group_ida, group->id);
+-		kfree(group);
++		kobject_put(&group->kobj);
+ 		return ERR_PTR(ret);
+ 	}
+ 
+diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
+index 32db16f6debc..2d19291ebc84 100644
+--- a/drivers/mmc/core/block.c
++++ b/drivers/mmc/core/block.c
+@@ -2475,8 +2475,8 @@ static int mmc_rpmb_chrdev_release(struct inode *inode, struct file *filp)
+ 	struct mmc_rpmb_data *rpmb = container_of(inode->i_cdev,
+ 						  struct mmc_rpmb_data, chrdev);
+ 
+-	put_device(&rpmb->dev);
+ 	mmc_blk_put(rpmb->md);
++	put_device(&rpmb->dev);
+ 
+ 	return 0;
+ }
+diff --git a/drivers/net/bonding/bond_sysfs_slave.c b/drivers/net/bonding/bond_sysfs_slave.c
+index 007481557191..9b8346638f69 100644
+--- a/drivers/net/bonding/bond_sysfs_slave.c
++++ b/drivers/net/bonding/bond_sysfs_slave.c
+@@ -149,8 +149,10 @@ int bond_sysfs_slave_add(struct slave *slave)
+ 
+ 	err = kobject_init_and_add(&slave->kobj, &slave_ktype,
+ 				   &(slave->dev->dev.kobj), "bonding_slave");
+-	if (err)
++	if (err) {
++		kobject_put(&slave->kobj);
+ 		return err;
++	}
+ 
+ 	for (a = slave_attrs; *a; ++a) {
+ 		err = sysfs_create_file(&slave->kobj, &((*a)->attr));
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 0123498242b9..b95425a63a13 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -639,11 +639,8 @@ mt7530_cpu_port_enable(struct mt7530_priv *priv,
+ 	mt7530_write(priv, MT7530_PVC_P(port),
+ 		     PORT_SPEC_TAG);
+ 
+-	/* Disable auto learning on the cpu port */
+-	mt7530_set(priv, MT7530_PSC_P(port), SA_DIS);
+-
+-	/* Unknown unicast frame fordwarding to the cpu port */
+-	mt7530_set(priv, MT7530_MFC, UNU_FFP(BIT(port)));
++	/* Unknown multicast frame forwarding to the cpu port */
++	mt7530_rmw(priv, MT7530_MFC, UNM_FFP_MASK, UNM_FFP(BIT(port)));
+ 
+ 	/* Set CPU port number */
+ 	if (priv->id == ID_MT7621)
+@@ -1247,8 +1244,6 @@ mt7530_setup(struct dsa_switch *ds)
+ 	/* Enable and reset MIB counters */
+ 	mt7530_mib_reset(ds);
+ 
+-	mt7530_clear(priv, MT7530_MFC, UNU_FFP_MASK);
+-
+ 	for (i = 0; i < MT7530_NUM_PORTS; i++) {
+ 		/* Disable forwarding by default on all ports */
+ 		mt7530_rmw(priv, MT7530_PCR_P(i), PCR_MATRIX_MASK,
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index 756140b7dfd5..0e7e36d8f994 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -31,6 +31,7 @@ enum {
+ #define MT7530_MFC			0x10
+ #define  BC_FFP(x)			(((x) & 0xff) << 24)
+ #define  UNM_FFP(x)			(((x) & 0xff) << 16)
++#define  UNM_FFP_MASK			UNM_FFP(~0)
+ #define  UNU_FFP(x)			(((x) & 0xff) << 8)
+ #define  UNU_FFP_MASK			UNU_FFP(~0)
+ #define  CPU_EN				BIT(7)
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index a7780c06fa65..b74580e87be8 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -385,6 +385,7 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
+ 	struct ocelot *ocelot = &felix->ocelot;
+ 	phy_interface_t *port_phy_modes;
+ 	resource_size_t switch_base;
++	struct resource res;
+ 	int port, i, err;
+ 
+ 	ocelot->num_phys_ports = num_phys_ports;
+@@ -416,17 +417,16 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
+ 
+ 	for (i = 0; i < TARGET_MAX; i++) {
+ 		struct regmap *target;
+-		struct resource *res;
+ 
+ 		if (!felix->info->target_io_res[i].name)
+ 			continue;
+ 
+-		res = &felix->info->target_io_res[i];
+-		res->flags = IORESOURCE_MEM;
+-		res->start += switch_base;
+-		res->end += switch_base;
++		memcpy(&res, &felix->info->target_io_res[i], sizeof(res));
++		res.flags = IORESOURCE_MEM;
++		res.start += switch_base;
++		res.end += switch_base;
+ 
+-		target = ocelot_regmap_init(ocelot, res);
++		target = ocelot_regmap_init(ocelot, &res);
+ 		if (IS_ERR(target)) {
+ 			dev_err(ocelot->dev,
+ 				"Failed to map device memory space\n");
+@@ -447,7 +447,6 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
+ 	for (port = 0; port < num_phys_ports; port++) {
+ 		struct ocelot_port *ocelot_port;
+ 		void __iomem *port_regs;
+-		struct resource *res;
+ 
+ 		ocelot_port = devm_kzalloc(ocelot->dev,
+ 					   sizeof(struct ocelot_port),
+@@ -459,12 +458,12 @@ static int felix_init_structs(struct felix *felix, int num_phys_ports)
+ 			return -ENOMEM;
+ 		}
+ 
+-		res = &felix->info->port_io_res[port];
+-		res->flags = IORESOURCE_MEM;
+-		res->start += switch_base;
+-		res->end += switch_base;
++		memcpy(&res, &felix->info->port_io_res[port], sizeof(res));
++		res.flags = IORESOURCE_MEM;
++		res.start += switch_base;
++		res.end += switch_base;
+ 
+-		port_regs = devm_ioremap_resource(ocelot->dev, res);
++		port_regs = devm_ioremap_resource(ocelot->dev, &res);
+ 		if (IS_ERR(port_regs)) {
+ 			dev_err(ocelot->dev,
+ 				"failed to map registers for port %d\n", port);
+diff --git a/drivers/net/dsa/ocelot/felix.h b/drivers/net/dsa/ocelot/felix.h
+index 8771d40324f1..2c024cc901d4 100644
+--- a/drivers/net/dsa/ocelot/felix.h
++++ b/drivers/net/dsa/ocelot/felix.h
+@@ -8,9 +8,9 @@
+ 
+ /* Platform-specific information */
+ struct felix_info {
+-	struct resource			*target_io_res;
+-	struct resource			*port_io_res;
+-	struct resource			*imdio_res;
++	const struct resource		*target_io_res;
++	const struct resource		*port_io_res;
++	const struct resource		*imdio_res;
+ 	const struct reg_field		*regfields;
+ 	const u32 *const		*map;
+ 	const struct ocelot_ops		*ops;
+diff --git a/drivers/net/dsa/ocelot/felix_vsc9959.c b/drivers/net/dsa/ocelot/felix_vsc9959.c
+index edc1a67c002b..50074da3a1a0 100644
+--- a/drivers/net/dsa/ocelot/felix_vsc9959.c
++++ b/drivers/net/dsa/ocelot/felix_vsc9959.c
+@@ -328,10 +328,8 @@ static const u32 *vsc9959_regmap[] = {
+ 	[GCB]	= vsc9959_gcb_regmap,
+ };
+ 
+-/* Addresses are relative to the PCI device's base address and
+- * will be fixed up at ioremap time.
+- */
+-static struct resource vsc9959_target_io_res[] = {
++/* Addresses are relative to the PCI device's base address */
++static const struct resource vsc9959_target_io_res[] = {
+ 	[ANA] = {
+ 		.start	= 0x0280000,
+ 		.end	= 0x028ffff,
+@@ -374,7 +372,7 @@ static struct resource vsc9959_target_io_res[] = {
+ 	},
+ };
+ 
+-static struct resource vsc9959_port_io_res[] = {
++static const struct resource vsc9959_port_io_res[] = {
+ 	{
+ 		.start	= 0x0100000,
+ 		.end	= 0x010ffff,
+@@ -410,7 +408,7 @@ static struct resource vsc9959_port_io_res[] = {
+ /* Port MAC 0 Internal MDIO bus through which the SerDes acting as an
+  * SGMII/QSGMII MAC PCS can be found.
+  */
+-static struct resource vsc9959_imdio_res = {
++static const struct resource vsc9959_imdio_res = {
+ 	.start		= 0x8030,
+ 	.end		= 0x8040,
+ 	.name		= "imdio",
+@@ -984,7 +982,7 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+ 	struct device *dev = ocelot->dev;
+ 	resource_size_t imdio_base;
+ 	void __iomem *imdio_regs;
+-	struct resource *res;
++	struct resource res;
+ 	struct enetc_hw *hw;
+ 	struct mii_bus *bus;
+ 	int port;
+@@ -1001,12 +999,12 @@ static int vsc9959_mdio_bus_alloc(struct ocelot *ocelot)
+ 	imdio_base = pci_resource_start(felix->pdev,
+ 					felix->info->imdio_pci_bar);
+ 
+-	res = felix->info->imdio_res;
+-	res->flags = IORESOURCE_MEM;
+-	res->start += imdio_base;
+-	res->end += imdio_base;
++	memcpy(&res, felix->info->imdio_res, sizeof(res));
++	res.flags = IORESOURCE_MEM;
++	res.start += imdio_base;
++	res.end += imdio_base;
+ 
+-	imdio_regs = devm_ioremap_resource(dev, res);
++	imdio_regs = devm_ioremap_resource(dev, &res);
+ 	if (IS_ERR(imdio_regs)) {
+ 		dev_err(dev, "failed to map internal MDIO registers\n");
+ 		return PTR_ERR(imdio_regs);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index d0ddd08c4112..fce4e26c36cf 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -4184,14 +4184,12 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ 	int i, intr_process, rc, tmo_count;
+ 	struct input *req = msg;
+ 	u32 *data = msg;
+-	__le32 *resp_len;
+ 	u8 *valid;
+ 	u16 cp_ring_id, len = 0;
+ 	struct hwrm_err_output *resp = bp->hwrm_cmd_resp_addr;
+ 	u16 max_req_len = BNXT_HWRM_MAX_REQ_LEN;
+ 	struct hwrm_short_input short_input = {0};
+ 	u32 doorbell_offset = BNXT_GRCPF_REG_CHIMP_COMM_TRIGGER;
+-	u8 *resp_addr = (u8 *)bp->hwrm_cmd_resp_addr;
+ 	u32 bar_offset = BNXT_GRCPF_REG_CHIMP_COMM;
+ 	u16 dst = BNXT_HWRM_CHNL_CHIMP;
+ 
+@@ -4209,7 +4207,6 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ 		bar_offset = BNXT_GRCPF_REG_KONG_COMM;
+ 		doorbell_offset = BNXT_GRCPF_REG_KONG_COMM_TRIGGER;
+ 		resp = bp->hwrm_cmd_kong_resp_addr;
+-		resp_addr = (u8 *)bp->hwrm_cmd_kong_resp_addr;
+ 	}
+ 
+ 	memset(resp, 0, PAGE_SIZE);
+@@ -4278,7 +4275,6 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ 	tmo_count = HWRM_SHORT_TIMEOUT_COUNTER;
+ 	timeout = timeout - HWRM_SHORT_MIN_TIMEOUT * HWRM_SHORT_TIMEOUT_COUNTER;
+ 	tmo_count += DIV_ROUND_UP(timeout, HWRM_MIN_TIMEOUT);
+-	resp_len = (__le32 *)(resp_addr + HWRM_RESP_LEN_OFFSET);
+ 
+ 	if (intr_process) {
+ 		u16 seq_id = bp->hwrm_intr_seq_id;
+@@ -4306,9 +4302,8 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ 					   le16_to_cpu(req->req_type));
+ 			return -EBUSY;
+ 		}
+-		len = (le32_to_cpu(*resp_len) & HWRM_RESP_LEN_MASK) >>
+-		      HWRM_RESP_LEN_SFT;
+-		valid = resp_addr + len - 1;
++		len = le16_to_cpu(resp->resp_len);
++		valid = ((u8 *)resp) + len - 1;
+ 	} else {
+ 		int j;
+ 
+@@ -4319,8 +4314,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ 			 */
+ 			if (test_bit(BNXT_STATE_FW_FATAL_COND, &bp->state))
+ 				return -EBUSY;
+-			len = (le32_to_cpu(*resp_len) & HWRM_RESP_LEN_MASK) >>
+-			      HWRM_RESP_LEN_SFT;
++			len = le16_to_cpu(resp->resp_len);
+ 			if (len)
+ 				break;
+ 			/* on first few passes, just barely sleep */
+@@ -4342,7 +4336,7 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
+ 		}
+ 
+ 		/* Last byte of resp contains valid bit */
+-		valid = resp_addr + len - 1;
++		valid = ((u8 *)resp) + len - 1;
+ 		for (j = 0; j < HWRM_VALID_BIT_DELAY_USEC; j++) {
+ 			/* make sure we read from updated DMA memory */
+ 			dma_rmb();
+@@ -9324,7 +9318,7 @@ static void __bnxt_close_nic(struct bnxt *bp, bool irq_re_init,
+ 	bnxt_free_skbs(bp);
+ 
+ 	/* Save ring stats before shutdown */
+-	if (bp->bnapi)
++	if (bp->bnapi && irq_re_init)
+ 		bnxt_get_ring_stats(bp, &bp->net_stats_prev);
+ 	if (irq_re_init) {
+ 		bnxt_free_irq(bp);
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+index ef0268649822..f76c42652e1a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+@@ -654,11 +654,6 @@ struct nqe_cn {
+ #define HWRM_CMD_TIMEOUT		(bp->hwrm_cmd_timeout)
+ #define HWRM_RESET_TIMEOUT		((HWRM_CMD_TIMEOUT) * 4)
+ #define HWRM_COREDUMP_TIMEOUT		((HWRM_CMD_TIMEOUT) * 12)
+-#define HWRM_RESP_ERR_CODE_MASK		0xffff
+-#define HWRM_RESP_LEN_OFFSET		4
+-#define HWRM_RESP_LEN_MASK		0xffff0000
+-#define HWRM_RESP_LEN_SFT		16
+-#define HWRM_RESP_VALID_MASK		0xff000000
+ #define BNXT_HWRM_REQ_MAX_SIZE		128
+ #define BNXT_HWRM_REQS_PER_PAGE		(BNXT_PAGE_SIZE /	\
+ 					 BNXT_HWRM_REQ_MAX_SIZE)
+diff --git a/drivers/net/ethernet/freescale/Kconfig b/drivers/net/ethernet/freescale/Kconfig
+index 2bd7ace0a953..bfc6bfe94d0a 100644
+--- a/drivers/net/ethernet/freescale/Kconfig
++++ b/drivers/net/ethernet/freescale/Kconfig
+@@ -77,6 +77,7 @@ config UCC_GETH
+ 	depends on QUICC_ENGINE && PPC32
+ 	select FSL_PQ_MDIO
+ 	select PHYLIB
++	select FIXED_PHY
+ 	---help---
+ 	  This driver supports the Gigabit Ethernet mode of the QUICC Engine,
+ 	  which is available on some Freescale SOCs.
+@@ -90,6 +91,7 @@ config GIANFAR
+ 	depends on HAS_DMA
+ 	select FSL_PQ_MDIO
+ 	select PHYLIB
++	select FIXED_PHY
+ 	select CRC32
+ 	---help---
+ 	  This driver supports the Gigabit TSEC on the MPC83xx, MPC85xx,
+diff --git a/drivers/net/ethernet/freescale/dpaa/Kconfig b/drivers/net/ethernet/freescale/dpaa/Kconfig
+index 3b325733a4f8..0a54c7e0e4ae 100644
+--- a/drivers/net/ethernet/freescale/dpaa/Kconfig
++++ b/drivers/net/ethernet/freescale/dpaa/Kconfig
+@@ -3,6 +3,7 @@ menuconfig FSL_DPAA_ETH
+ 	tristate "DPAA Ethernet"
+ 	depends on FSL_DPAA && FSL_FMAN
+ 	select PHYLIB
++	select FIXED_PHY
+ 	select FSL_FMAN_MAC
+ 	---help---
+ 	  Data Path Acceleration Architecture Ethernet driver,
+diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+index ca74a684a904..ab337632793b 100644
+--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
++++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+@@ -2902,7 +2902,7 @@ static int dpaa_eth_probe(struct platform_device *pdev)
+ 	}
+ 
+ 	/* Do this here, so we can be verbose early */
+-	SET_NETDEV_DEV(net_dev, dev);
++	SET_NETDEV_DEV(net_dev, dev->parent);
+ 	dev_set_drvdata(dev, net_dev);
+ 
+ 	priv = netdev_priv(net_dev);
+diff --git a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+index 4344a59c823f..6122057d60c0 100644
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_cls.c
+@@ -1070,7 +1070,7 @@ void mvpp2_cls_oversize_rxq_set(struct mvpp2_port *port)
+ 		    (port->first_rxq >> MVPP2_CLS_OVERSIZE_RXQ_LOW_BITS));
+ 
+ 	val = mvpp2_read(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG);
+-	val |= MVPP2_CLS_SWFWD_PCTRL_MASK(port->id);
++	val &= ~MVPP2_CLS_SWFWD_PCTRL_MASK(port->id);
+ 	mvpp2_write(port->priv, MVPP2_CLS_SWFWD_PCTRL_REG, val);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx4/fw.c b/drivers/net/ethernet/mellanox/mlx4/fw.c
+index 6e501af0e532..f6ff9620a137 100644
+--- a/drivers/net/ethernet/mellanox/mlx4/fw.c
++++ b/drivers/net/ethernet/mellanox/mlx4/fw.c
+@@ -2734,7 +2734,7 @@ void mlx4_opreq_action(struct work_struct *work)
+ 		if (err) {
+ 			mlx4_err(dev, "Failed to retrieve required operation: %d\n",
+ 				 err);
+-			return;
++			goto out;
+ 		}
+ 		MLX4_GET(modifier, outbox, GET_OP_REQ_MODIFIER_OFFSET);
+ 		MLX4_GET(token, outbox, GET_OP_REQ_TOKEN_OFFSET);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+index cede5bdfd598..7a77fe40af3a 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+@@ -848,6 +848,14 @@ static void free_msg(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *msg);
+ static void mlx5_free_cmd_msg(struct mlx5_core_dev *dev,
+ 			      struct mlx5_cmd_msg *msg);
+ 
++static bool opcode_allowed(struct mlx5_cmd *cmd, u16 opcode)
++{
++	if (cmd->allowed_opcode == CMD_ALLOWED_OPCODE_ALL)
++		return true;
++
++	return cmd->allowed_opcode == opcode;
++}
++
+ static void cmd_work_handler(struct work_struct *work)
+ {
+ 	struct mlx5_cmd_work_ent *ent = container_of(work, struct mlx5_cmd_work_ent, work);
+@@ -861,6 +869,7 @@ static void cmd_work_handler(struct work_struct *work)
+ 	int alloc_ret;
+ 	int cmd_mode;
+ 
++	complete(&ent->handling);
+ 	sem = ent->page_queue ? &cmd->pages_sem : &cmd->sem;
+ 	down(sem);
+ 	if (!ent->page_queue) {
+@@ -913,7 +922,9 @@ static void cmd_work_handler(struct work_struct *work)
+ 
+ 	/* Skip sending command to fw if internal error */
+ 	if (pci_channel_offline(dev->pdev) ||
+-	    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
++	    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR ||
++	    cmd->state != MLX5_CMDIF_STATE_UP ||
++	    !opcode_allowed(&dev->cmd, ent->op)) {
+ 		u8 status = 0;
+ 		u32 drv_synd;
+ 
+@@ -978,6 +989,11 @@ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
+ 	struct mlx5_cmd *cmd = &dev->cmd;
+ 	int err;
+ 
++	if (!wait_for_completion_timeout(&ent->handling, timeout) &&
++	    cancel_work_sync(&ent->work)) {
++		ent->ret = -ECANCELED;
++		goto out_err;
++	}
+ 	if (cmd->mode == CMD_MODE_POLLING || ent->polling) {
+ 		wait_for_completion(&ent->done);
+ 	} else if (!wait_for_completion_timeout(&ent->done, timeout)) {
+@@ -985,12 +1001,17 @@ static int wait_func(struct mlx5_core_dev *dev, struct mlx5_cmd_work_ent *ent)
+ 		mlx5_cmd_comp_handler(dev, 1UL << ent->idx, true);
+ 	}
+ 
++out_err:
+ 	err = ent->ret;
+ 
+ 	if (err == -ETIMEDOUT) {
+ 		mlx5_core_warn(dev, "%s(0x%x) timeout. Will cause a leak of a command resource\n",
+ 			       mlx5_command_str(msg_to_opcode(ent->in)),
+ 			       msg_to_opcode(ent->in));
++	} else if (err == -ECANCELED) {
++		mlx5_core_warn(dev, "%s(0x%x) canceled on out of queue timeout.\n",
++			       mlx5_command_str(msg_to_opcode(ent->in)),
++			       msg_to_opcode(ent->in));
+ 	}
+ 	mlx5_core_dbg(dev, "err %d, delivery status %s(%d)\n",
+ 		      err, deliv_status_to_str(ent->status), ent->status);
+@@ -1026,6 +1047,7 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
+ 	ent->token = token;
+ 	ent->polling = force_polling;
+ 
++	init_completion(&ent->handling);
+ 	if (!callback)
+ 		init_completion(&ent->done);
+ 
+@@ -1045,6 +1067,8 @@ static int mlx5_cmd_invoke(struct mlx5_core_dev *dev, struct mlx5_cmd_msg *in,
+ 	err = wait_func(dev, ent);
+ 	if (err == -ETIMEDOUT)
+ 		goto out;
++	if (err == -ECANCELED)
++		goto out_free;
+ 
+ 	ds = ent->ts2 - ent->ts1;
+ 	op = MLX5_GET(mbox_in, in->first.data, opcode);
+@@ -1391,6 +1415,22 @@ static void create_debugfs_files(struct mlx5_core_dev *dev)
+ 	mlx5_cmdif_debugfs_init(dev);
+ }
+ 
++void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode)
++{
++	struct mlx5_cmd *cmd = &dev->cmd;
++	int i;
++
++	for (i = 0; i < cmd->max_reg_cmds; i++)
++		down(&cmd->sem);
++	down(&cmd->pages_sem);
++
++	cmd->allowed_opcode = opcode;
++
++	up(&cmd->pages_sem);
++	for (i = 0; i < cmd->max_reg_cmds; i++)
++		up(&cmd->sem);
++}
++
+ static void mlx5_cmd_change_mod(struct mlx5_core_dev *dev, int mode)
+ {
+ 	struct mlx5_cmd *cmd = &dev->cmd;
+@@ -1667,12 +1707,14 @@ static int cmd_exec(struct mlx5_core_dev *dev, void *in, int in_size, void *out,
+ 	int err;
+ 	u8 status = 0;
+ 	u32 drv_synd;
++	u16 opcode;
+ 	u8 token;
+ 
++	opcode = MLX5_GET(mbox_in, in, opcode);
+ 	if (pci_channel_offline(dev->pdev) ||
+-	    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR) {
+-		u16 opcode = MLX5_GET(mbox_in, in, opcode);
+-
++	    dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR ||
++	    dev->cmd.state != MLX5_CMDIF_STATE_UP ||
++	    !opcode_allowed(&dev->cmd, opcode)) {
+ 		err = mlx5_internal_err_ret_value(dev, opcode, &drv_synd, &status);
+ 		MLX5_SET(mbox_out, out, status, status);
+ 		MLX5_SET(mbox_out, out, syndrome, drv_synd);
+@@ -1937,6 +1979,7 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
+ 		goto err_free_page;
+ 	}
+ 
++	cmd->state = MLX5_CMDIF_STATE_DOWN;
+ 	cmd->checksum_disabled = 1;
+ 	cmd->max_reg_cmds = (1 << cmd->log_sz) - 1;
+ 	cmd->bitmask = (1UL << cmd->max_reg_cmds) - 1;
+@@ -1974,6 +2017,7 @@ int mlx5_cmd_init(struct mlx5_core_dev *dev)
+ 	mlx5_core_dbg(dev, "descriptor at dma 0x%llx\n", (unsigned long long)(cmd->dma));
+ 
+ 	cmd->mode = CMD_MODE_POLLING;
++	cmd->allowed_opcode = CMD_ALLOWED_OPCODE_ALL;
+ 
+ 	create_msg_cache(dev);
+ 
+@@ -2013,3 +2057,10 @@ void mlx5_cmd_cleanup(struct mlx5_core_dev *dev)
+ 	dma_pool_destroy(cmd->pool);
+ }
+ EXPORT_SYMBOL(mlx5_cmd_cleanup);
++
++void mlx5_cmd_set_state(struct mlx5_core_dev *dev,
++			enum mlx5_cmdif_state cmdif_state)
++{
++	dev->cmd.state = cmdif_state;
++}
++EXPORT_SYMBOL(mlx5_cmd_set_state);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+index 5a5e6a21c6e1..80c579948152 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
+@@ -1104,7 +1104,7 @@ void mlx5e_close_drop_rq(struct mlx5e_rq *drop_rq);
+ int mlx5e_create_indirect_rqt(struct mlx5e_priv *priv);
+ 
+ int mlx5e_create_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc);
+-void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc);
++void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv);
+ 
+ int mlx5e_create_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
+ void mlx5e_destroy_direct_rqts(struct mlx5e_priv *priv, struct mlx5e_tir *tirs);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+index 46725cd743a3..7d1985fa0d4f 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_accel/ktls.c
+@@ -69,8 +69,8 @@ static void mlx5e_ktls_del(struct net_device *netdev,
+ 	struct mlx5e_ktls_offload_context_tx *tx_priv =
+ 		mlx5e_get_ktls_tx_priv_ctx(tls_ctx);
+ 
+-	mlx5_ktls_destroy_key(priv->mdev, tx_priv->key_id);
+ 	mlx5e_destroy_tis(priv->mdev, tx_priv->tisn);
++	mlx5_ktls_destroy_key(priv->mdev, tx_priv->key_id);
+ 	kvfree(tx_priv);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+index d02db5aebac4..4fef7587165c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+@@ -2747,7 +2747,8 @@ void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen)
+ 		mlx5_core_modify_tir(mdev, priv->indir_tir[tt].tirn, in, inlen);
+ 	}
+ 
+-	if (!mlx5e_tunnel_inner_ft_supported(priv->mdev))
++	/* Verify inner tirs resources allocated */
++	if (!priv->inner_indir_tir[0].tirn)
+ 		return;
+ 
+ 	for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
+@@ -3394,14 +3395,15 @@ out:
+ 	return err;
+ }
+ 
+-void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv, bool inner_ttc)
++void mlx5e_destroy_indirect_tirs(struct mlx5e_priv *priv)
+ {
+ 	int i;
+ 
+ 	for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
+ 		mlx5e_destroy_tir(priv->mdev, &priv->indir_tir[i]);
+ 
+-	if (!inner_ttc || !mlx5e_tunnel_inner_ft_supported(priv->mdev))
++	/* Verify inner tirs resources allocated */
++	if (!priv->inner_indir_tir[0].tirn)
+ 		return;
+ 
+ 	for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
+@@ -5107,7 +5109,7 @@ err_destroy_xsk_rqts:
+ err_destroy_direct_tirs:
+ 	mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
+ err_destroy_indirect_tirs:
+-	mlx5e_destroy_indirect_tirs(priv, true);
++	mlx5e_destroy_indirect_tirs(priv);
+ err_destroy_direct_rqts:
+ 	mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
+ err_destroy_indirect_rqts:
+@@ -5126,7 +5128,7 @@ static void mlx5e_cleanup_nic_rx(struct mlx5e_priv *priv)
+ 	mlx5e_destroy_direct_tirs(priv, priv->xsk_tir);
+ 	mlx5e_destroy_direct_rqts(priv, priv->xsk_tir);
+ 	mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
+-	mlx5e_destroy_indirect_tirs(priv, true);
++	mlx5e_destroy_indirect_tirs(priv);
+ 	mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
+ 	mlx5e_destroy_rqt(priv, &priv->indir_rqt);
+ 	mlx5e_close_drop_rq(&priv->drop_rq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+index 2ad0d09cc9bd..c3c3d89d9153 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rep.c
+@@ -1667,7 +1667,7 @@ err_destroy_ttc_table:
+ err_destroy_direct_tirs:
+ 	mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
+ err_destroy_indirect_tirs:
+-	mlx5e_destroy_indirect_tirs(priv, false);
++	mlx5e_destroy_indirect_tirs(priv);
+ err_destroy_direct_rqts:
+ 	mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
+ err_destroy_indirect_rqts:
+@@ -1684,7 +1684,7 @@ static void mlx5e_cleanup_rep_rx(struct mlx5e_priv *priv)
+ 	mlx5_del_flow_rules(rpriv->vport_rx_rule);
+ 	mlx5e_destroy_ttc_table(priv, &priv->fs.ttc);
+ 	mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
+-	mlx5e_destroy_indirect_tirs(priv, false);
++	mlx5e_destroy_indirect_tirs(priv);
+ 	mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
+ 	mlx5e_destroy_rqt(priv, &priv->indir_rqt);
+ 	mlx5e_close_drop_rq(&priv->drop_rq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+index ee60383adc5b..c2b801b435cf 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+@@ -538,10 +538,9 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
+ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq)
+ {
+ 	struct mlx5e_tx_wqe_info *wi;
++	u32 dma_fifo_cc, nbytes = 0;
++	u16 ci, sqcc, npkts = 0;
+ 	struct sk_buff *skb;
+-	u32 dma_fifo_cc;
+-	u16 sqcc;
+-	u16 ci;
+ 	int i;
+ 
+ 	sqcc = sq->cc;
+@@ -566,11 +565,15 @@ void mlx5e_free_txqsq_descs(struct mlx5e_txqsq *sq)
+ 		}
+ 
+ 		dev_kfree_skb_any(skb);
++		npkts++;
++		nbytes += wi->num_bytes;
+ 		sqcc += wi->num_wqebbs;
+ 	}
+ 
+ 	sq->dma_fifo_cc = dma_fifo_cc;
+ 	sq->cc = sqcc;
++
++	netdev_tx_completed_queue(sq->txq, npkts, nbytes);
+ }
+ 
+ #ifdef CONFIG_MLX5_CORE_IPOIB
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+index cccea3a8eddd..ce6c621af043 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+@@ -611,11 +611,13 @@ static int create_async_eqs(struct mlx5_core_dev *dev)
+ 		.nent = MLX5_NUM_CMD_EQE,
+ 		.mask[0] = 1ull << MLX5_EVENT_TYPE_CMD,
+ 	};
++	mlx5_cmd_allowed_opcode(dev, MLX5_CMD_OP_CREATE_EQ);
+ 	err = setup_async_eq(dev, &table->cmd_eq, &param, "cmd");
+ 	if (err)
+ 		goto err1;
+ 
+ 	mlx5_cmd_use_events(dev);
++	mlx5_cmd_allowed_opcode(dev, CMD_ALLOWED_OPCODE_ALL);
+ 
+ 	param = (struct mlx5_eq_param) {
+ 		.irq_index = 0,
+@@ -645,6 +647,7 @@ err2:
+ 	mlx5_cmd_use_polling(dev);
+ 	cleanup_async_eq(dev, &table->cmd_eq, "cmd");
+ err1:
++	mlx5_cmd_allowed_opcode(dev, CMD_ALLOWED_OPCODE_ALL);
+ 	mlx5_eq_notifier_unregister(dev, &table->cq_err_nb);
+ 	return err;
+ }
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/events.c b/drivers/net/ethernet/mellanox/mlx5/core/events.c
+index 8bcf3426b9c6..3ce17c3d7a00 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/events.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/events.c
+@@ -346,8 +346,10 @@ int mlx5_events_init(struct mlx5_core_dev *dev)
+ 	events->dev = dev;
+ 	dev->priv.events = events;
+ 	events->wq = create_singlethread_workqueue("mlx5_events");
+-	if (!events->wq)
++	if (!events->wq) {
++		kfree(events);
+ 		return -ENOMEM;
++	}
+ 	INIT_WORK(&events->pcie_core_work, mlx5_pcie_event);
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index 9dc24241dc91..cf09cfc33234 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -323,14 +323,13 @@ static void tree_put_node(struct fs_node *node, bool locked)
+ 		if (node->del_hw_func)
+ 			node->del_hw_func(node);
+ 		if (parent_node) {
+-			/* Only root namespace doesn't have parent and we just
+-			 * need to free its node.
+-			 */
+ 			down_write_ref_node(parent_node, locked);
+ 			list_del_init(&node->list);
+ 			if (node->del_sw_func)
+ 				node->del_sw_func(node);
+ 			up_write_ref_node(parent_node, locked);
++		} else if (node->del_sw_func) {
++			node->del_sw_func(node);
+ 		} else {
+ 			kfree(node);
+ 		}
+@@ -417,6 +416,12 @@ static void del_sw_ns(struct fs_node *node)
+ 
+ static void del_sw_prio(struct fs_node *node)
+ {
++	struct mlx5_flow_root_namespace *root_ns;
++	struct mlx5_flow_namespace *ns;
++
++	fs_get_obj(ns, node);
++	root_ns = container_of(ns, struct mlx5_flow_root_namespace, ns);
++	mutex_destroy(&root_ns->chain_lock);
+ 	kfree(node);
+ }
+ 
+@@ -447,8 +452,10 @@ static void del_sw_flow_table(struct fs_node *node)
+ 	fs_get_obj(ft, node);
+ 
+ 	rhltable_destroy(&ft->fgs_hash);
+-	fs_get_obj(prio, ft->node.parent);
+-	prio->num_ft--;
++	if (ft->node.parent) {
++		fs_get_obj(prio, ft->node.parent);
++		prio->num_ft--;
++	}
+ 	kfree(ft);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+index 56078b23f1a0..0a334ceba7b1 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/ipoib/ipoib.c
+@@ -396,7 +396,7 @@ static int mlx5i_init_rx(struct mlx5e_priv *priv)
+ err_destroy_direct_tirs:
+ 	mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
+ err_destroy_indirect_tirs:
+-	mlx5e_destroy_indirect_tirs(priv, true);
++	mlx5e_destroy_indirect_tirs(priv);
+ err_destroy_direct_rqts:
+ 	mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
+ err_destroy_indirect_rqts:
+@@ -412,7 +412,7 @@ static void mlx5i_cleanup_rx(struct mlx5e_priv *priv)
+ {
+ 	mlx5i_destroy_flow_steering(priv);
+ 	mlx5e_destroy_direct_tirs(priv, priv->direct_tir);
+-	mlx5e_destroy_indirect_tirs(priv, true);
++	mlx5e_destroy_indirect_tirs(priv);
+ 	mlx5e_destroy_direct_rqts(priv, priv->direct_tir);
+ 	mlx5e_destroy_rqt(priv, &priv->indir_rqt);
+ 	mlx5e_close_drop_rq(&priv->drop_rq);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index f554cfddcf4e..4a08e4eef283 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -962,6 +962,8 @@ static int mlx5_function_setup(struct mlx5_core_dev *dev, bool boot)
+ 		goto err_cmd_cleanup;
+ 	}
+ 
++	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_UP);
++
+ 	err = mlx5_core_enable_hca(dev, 0);
+ 	if (err) {
+ 		mlx5_core_err(dev, "enable hca failed\n");
+@@ -1023,6 +1025,7 @@ reclaim_boot_pages:
+ err_disable_hca:
+ 	mlx5_core_disable_hca(dev, 0);
+ err_cmd_cleanup:
++	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
+ 	mlx5_cmd_cleanup(dev);
+ 
+ 	return err;
+@@ -1040,6 +1043,7 @@ static int mlx5_function_teardown(struct mlx5_core_dev *dev, bool boot)
+ 	}
+ 	mlx5_reclaim_startup_pages(dev);
+ 	mlx5_core_disable_hca(dev, 0);
++	mlx5_cmd_set_state(dev, MLX5_CMDIF_STATE_DOWN);
+ 	mlx5_cmd_cleanup(dev);
+ 
+ 	return 0;
+@@ -1179,7 +1183,7 @@ int mlx5_load_one(struct mlx5_core_dev *dev, bool boot)
+ 
+ 	err = mlx5_function_setup(dev, boot);
+ 	if (err)
+-		goto out;
++		goto err_function;
+ 
+ 	if (boot) {
+ 		err = mlx5_init_once(dev);
+@@ -1225,6 +1229,7 @@ err_load:
+ 		mlx5_cleanup_once(dev);
+ function_teardown:
+ 	mlx5_function_teardown(dev, boot);
++err_function:
+ 	dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ 	mutex_unlock(&dev->intf_state_mutex);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+index 7358b5bc7eb6..58ebabe99876 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+@@ -4043,6 +4043,7 @@ static void mlxsw_sp_ports_remove(struct mlxsw_sp *mlxsw_sp)
+ 			mlxsw_sp_port_remove(mlxsw_sp, i);
+ 	mlxsw_sp_cpu_port_remove(mlxsw_sp);
+ 	kfree(mlxsw_sp->ports);
++	mlxsw_sp->ports = NULL;
+ }
+ 
+ static int mlxsw_sp_ports_create(struct mlxsw_sp *mlxsw_sp)
+@@ -4079,6 +4080,7 @@ err_port_create:
+ 	mlxsw_sp_cpu_port_remove(mlxsw_sp);
+ err_cpu_port_create:
+ 	kfree(mlxsw_sp->ports);
++	mlxsw_sp->ports = NULL;
+ 	return err;
+ }
+ 
+@@ -4200,6 +4202,14 @@ static int mlxsw_sp_local_ports_offset(struct mlxsw_core *mlxsw_core,
+ 	return mlxsw_core_res_get(mlxsw_core, local_ports_in_x_res_id);
+ }
+ 
++static struct mlxsw_sp_port *
++mlxsw_sp_port_get_by_local_port(struct mlxsw_sp *mlxsw_sp, u8 local_port)
++{
++	if (mlxsw_sp->ports && mlxsw_sp->ports[local_port])
++		return mlxsw_sp->ports[local_port];
++	return NULL;
++}
++
+ static int mlxsw_sp_port_split(struct mlxsw_core *mlxsw_core, u8 local_port,
+ 			       unsigned int count,
+ 			       struct netlink_ext_ack *extack)
+@@ -4213,7 +4223,7 @@ static int mlxsw_sp_port_split(struct mlxsw_core *mlxsw_core, u8 local_port,
+ 	int i;
+ 	int err;
+ 
+-	mlxsw_sp_port = mlxsw_sp->ports[local_port];
++	mlxsw_sp_port = mlxsw_sp_port_get_by_local_port(mlxsw_sp, local_port);
+ 	if (!mlxsw_sp_port) {
+ 		dev_err(mlxsw_sp->bus_info->dev, "Port number \"%d\" does not exist\n",
+ 			local_port);
+@@ -4308,7 +4318,7 @@ static int mlxsw_sp_port_unsplit(struct mlxsw_core *mlxsw_core, u8 local_port,
+ 	int offset;
+ 	int i;
+ 
+-	mlxsw_sp_port = mlxsw_sp->ports[local_port];
++	mlxsw_sp_port = mlxsw_sp_port_get_by_local_port(mlxsw_sp, local_port);
+ 	if (!mlxsw_sp_port) {
+ 		dev_err(mlxsw_sp->bus_info->dev, "Port number \"%d\" does not exist\n",
+ 			local_port);
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+index f0e98ec8f1ee..c69232445ab7 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+@@ -1259,6 +1259,7 @@ static void mlxsw_sx_ports_remove(struct mlxsw_sx *mlxsw_sx)
+ 		if (mlxsw_sx_port_created(mlxsw_sx, i))
+ 			mlxsw_sx_port_remove(mlxsw_sx, i);
+ 	kfree(mlxsw_sx->ports);
++	mlxsw_sx->ports = NULL;
+ }
+ 
+ static int mlxsw_sx_ports_create(struct mlxsw_sx *mlxsw_sx)
+@@ -1293,6 +1294,7 @@ err_port_module_info_get:
+ 		if (mlxsw_sx_port_created(mlxsw_sx, i))
+ 			mlxsw_sx_port_remove(mlxsw_sx, i);
+ 	kfree(mlxsw_sx->ports);
++	mlxsw_sx->ports = NULL;
+ 	return err;
+ }
+ 
+@@ -1376,6 +1378,12 @@ static int mlxsw_sx_port_type_set(struct mlxsw_core *mlxsw_core, u8 local_port,
+ 	u8 module, width;
+ 	int err;
+ 
++	if (!mlxsw_sx->ports || !mlxsw_sx->ports[local_port]) {
++		dev_err(mlxsw_sx->bus_info->dev, "Port number \"%d\" does not exist\n",
++			local_port);
++		return -EINVAL;
++	}
++
+ 	if (new_type == DEVLINK_PORT_TYPE_AUTO)
+ 		return -EOPNOTSUPP;
+ 
+diff --git a/drivers/net/ethernet/microchip/encx24j600.c b/drivers/net/ethernet/microchip/encx24j600.c
+index 39925e4bf2ec..b25a13da900a 100644
+--- a/drivers/net/ethernet/microchip/encx24j600.c
++++ b/drivers/net/ethernet/microchip/encx24j600.c
+@@ -1070,7 +1070,7 @@ static int encx24j600_spi_probe(struct spi_device *spi)
+ 	if (unlikely(ret)) {
+ 		netif_err(priv, probe, ndev, "Error %d initializing card encx24j600 card\n",
+ 			  ret);
+-		goto out_free;
++		goto out_stop;
+ 	}
+ 
+ 	eidled = encx24j600_read_reg(priv, EIDLED);
+@@ -1088,6 +1088,8 @@ static int encx24j600_spi_probe(struct spi_device *spi)
+ 
+ out_unregister:
+ 	unregister_netdev(priv->ndev);
++out_stop:
++	kthread_stop(priv->kworker_task);
+ out_free:
+ 	free_netdev(ndev);
+ 
+@@ -1100,6 +1102,7 @@ static int encx24j600_spi_remove(struct spi_device *spi)
+ 	struct encx24j600_priv *priv = dev_get_drvdata(&spi->dev);
+ 
+ 	unregister_netdev(priv->ndev);
++	kthread_stop(priv->kworker_task);
+ 
+ 	free_netdev(priv->ndev);
+ 
+diff --git a/drivers/net/ethernet/mscc/ocelot.c b/drivers/net/ethernet/mscc/ocelot.c
+index 419e2ce2eac0..d5aa4e725853 100644
+--- a/drivers/net/ethernet/mscc/ocelot.c
++++ b/drivers/net/ethernet/mscc/ocelot.c
+@@ -1460,7 +1460,7 @@ static void ocelot_port_attr_ageing_set(struct ocelot *ocelot, int port,
+ 					unsigned long ageing_clock_t)
+ {
+ 	unsigned long ageing_jiffies = clock_t_to_jiffies(ageing_clock_t);
+-	u32 ageing_time = jiffies_to_msecs(ageing_jiffies) / 1000;
++	u32 ageing_time = jiffies_to_msecs(ageing_jiffies);
+ 
+ 	ocelot_set_ageing_time(ocelot, ageing_time);
+ }
+diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+index 2a533280b124..29b9c728a65e 100644
+--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
++++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_83xx_hw.c
+@@ -3651,7 +3651,7 @@ int qlcnic_83xx_interrupt_test(struct net_device *netdev)
+ 	ahw->diag_cnt = 0;
+ 	ret = qlcnic_alloc_mbx_args(&cmd, adapter, QLCNIC_CMD_INTRPT_TEST);
+ 	if (ret)
+-		goto fail_diag_irq;
++		goto fail_mbx_args;
+ 
+ 	if (adapter->flags & QLCNIC_MSIX_ENABLED)
+ 		intrpt_id = ahw->intr_tbl[0].id;
+@@ -3681,6 +3681,8 @@ int qlcnic_83xx_interrupt_test(struct net_device *netdev)
+ 
+ done:
+ 	qlcnic_free_mbx_args(&cmd);
++
++fail_mbx_args:
+ 	qlcnic_83xx_diag_free_res(netdev, drv_sds_rings);
+ 
+ fail_diag_irq:
+diff --git a/drivers/net/ethernet/realtek/r8169_main.c b/drivers/net/ethernet/realtek/r8169_main.c
+index 07a6b609f741..6e4fe2566f6b 100644
+--- a/drivers/net/ethernet/realtek/r8169_main.c
++++ b/drivers/net/ethernet/realtek/r8169_main.c
+@@ -1044,6 +1044,13 @@ static u16 rtl_ephy_read(struct rtl8169_private *tp, int reg_addr)
+ 		RTL_R32(tp, EPHYAR) & EPHYAR_DATA_MASK : ~0;
+ }
+ 
++static void r8168fp_adjust_ocp_cmd(struct rtl8169_private *tp, u32 *cmd, int type)
++{
++	/* based on RTL8168FP_OOBMAC_BASE in vendor driver */
++	if (tp->mac_version == RTL_GIGA_MAC_VER_52 && type == ERIAR_OOB)
++		*cmd |= 0x7f0 << 18;
++}
++
+ DECLARE_RTL_COND(rtl_eriar_cond)
+ {
+ 	return RTL_R32(tp, ERIAR) & ERIAR_FLAG;
+@@ -1052,9 +1059,12 @@ DECLARE_RTL_COND(rtl_eriar_cond)
+ static void _rtl_eri_write(struct rtl8169_private *tp, int addr, u32 mask,
+ 			   u32 val, int type)
+ {
++	u32 cmd = ERIAR_WRITE_CMD | type | mask | addr;
++
+ 	BUG_ON((addr & 3) || (mask == 0));
+ 	RTL_W32(tp, ERIDR, val);
+-	RTL_W32(tp, ERIAR, ERIAR_WRITE_CMD | type | mask | addr);
++	r8168fp_adjust_ocp_cmd(tp, &cmd, type);
++	RTL_W32(tp, ERIAR, cmd);
+ 
+ 	rtl_udelay_loop_wait_low(tp, &rtl_eriar_cond, 100, 100);
+ }
+@@ -1067,7 +1077,10 @@ static void rtl_eri_write(struct rtl8169_private *tp, int addr, u32 mask,
+ 
+ static u32 _rtl_eri_read(struct rtl8169_private *tp, int addr, int type)
+ {
+-	RTL_W32(tp, ERIAR, ERIAR_READ_CMD | type | ERIAR_MASK_1111 | addr);
++	u32 cmd = ERIAR_READ_CMD | type | ERIAR_MASK_1111 | addr;
++
++	r8168fp_adjust_ocp_cmd(tp, &cmd, type);
++	RTL_W32(tp, ERIAR, cmd);
+ 
+ 	return rtl_udelay_loop_wait_high(tp, &rtl_eriar_cond, 100, 100) ?
+ 		RTL_R32(tp, ERIDR) : ~0;
+diff --git a/drivers/net/ethernet/sgi/ioc3-eth.c b/drivers/net/ethernet/sgi/ioc3-eth.c
+index db6b2988e632..f4895777f5e3 100644
+--- a/drivers/net/ethernet/sgi/ioc3-eth.c
++++ b/drivers/net/ethernet/sgi/ioc3-eth.c
+@@ -865,14 +865,14 @@ static int ioc3eth_probe(struct platform_device *pdev)
+ 	ip = netdev_priv(dev);
+ 	ip->dma_dev = pdev->dev.parent;
+ 	ip->regs = devm_platform_ioremap_resource(pdev, 0);
+-	if (!ip->regs) {
+-		err = -ENOMEM;
++	if (IS_ERR(ip->regs)) {
++		err = PTR_ERR(ip->regs);
+ 		goto out_free;
+ 	}
+ 
+ 	ip->ssram = devm_platform_ioremap_resource(pdev, 1);
+-	if (!ip->ssram) {
+-		err = -ENOMEM;
++	if (IS_ERR(ip->ssram)) {
++		err = PTR_ERR(ip->ssram);
+ 		goto out_free;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/sun/cassini.c b/drivers/net/ethernet/sun/cassini.c
+index 6ec9163e232c..b716f188188e 100644
+--- a/drivers/net/ethernet/sun/cassini.c
++++ b/drivers/net/ethernet/sun/cassini.c
+@@ -4971,7 +4971,7 @@ static int cas_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
+ 					  cas_cacheline_size)) {
+ 			dev_err(&pdev->dev, "Could not set PCI cache "
+ 			       "line size\n");
+-			goto err_write_cacheline;
++			goto err_out_free_res;
+ 		}
+ 	}
+ #endif
+@@ -5144,7 +5144,6 @@ err_out_iounmap:
+ err_out_free_res:
+ 	pci_release_regions(pdev);
+ 
+-err_write_cacheline:
+ 	/* Try to restore it in case the error occurred after we
+ 	 * set it.
+ 	 */
+diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
+index 6ae4a72e6f43..5577ff0b7663 100644
+--- a/drivers/net/ethernet/ti/cpsw.c
++++ b/drivers/net/ethernet/ti/cpsw.c
+@@ -1752,11 +1752,15 @@ static int cpsw_suspend(struct device *dev)
+ 	struct cpsw_common *cpsw = dev_get_drvdata(dev);
+ 	int i;
+ 
++	rtnl_lock();
++
+ 	for (i = 0; i < cpsw->data.slaves; i++)
+ 		if (cpsw->slaves[i].ndev)
+ 			if (netif_running(cpsw->slaves[i].ndev))
+ 				cpsw_ndo_stop(cpsw->slaves[i].ndev);
+ 
++	rtnl_unlock();
++
+ 	/* Select sleep pin state */
+ 	pinctrl_pm_select_sleep_state(dev);
+ 
+diff --git a/drivers/net/hamradio/bpqether.c b/drivers/net/hamradio/bpqether.c
+index fbea6f232819..e2ad3c2e8df5 100644
+--- a/drivers/net/hamradio/bpqether.c
++++ b/drivers/net/hamradio/bpqether.c
+@@ -127,7 +127,8 @@ static inline struct net_device *bpq_get_ax25_dev(struct net_device *dev)
+ {
+ 	struct bpqdev *bpq;
+ 
+-	list_for_each_entry_rcu(bpq, &bpq_devices, bpq_list) {
++	list_for_each_entry_rcu(bpq, &bpq_devices, bpq_list,
++				lockdep_rtnl_is_held()) {
+ 		if (bpq->ethdev == dev)
+ 			return bpq->axdev;
+ 	}
+diff --git a/drivers/net/usb/cdc_ether.c b/drivers/net/usb/cdc_ether.c
+index 0cdb2ce47645..a657943c9f01 100644
+--- a/drivers/net/usb/cdc_ether.c
++++ b/drivers/net/usb/cdc_ether.c
+@@ -815,14 +815,21 @@ static const struct usb_device_id	products[] = {
+ 	.driver_info = 0,
+ },
+ 
+-/* Microsoft Surface 3 dock (based on Realtek RTL8153) */
++/* Microsoft Surface Ethernet Adapter (based on Realtek RTL8153) */
+ {
+ 	USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x07c6, USB_CLASS_COMM,
+ 			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+ 	.driver_info = 0,
+ },
+ 
+-	/* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
++/* Microsoft Surface Ethernet Adapter (based on Realtek RTL8153B) */
++{
++	USB_DEVICE_AND_INTERFACE_INFO(MICROSOFT_VENDOR_ID, 0x0927, USB_CLASS_COMM,
++			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
++	.driver_info = 0,
++},
++
++/* TP-LINK UE300 USB 3.0 Ethernet Adapters (based on Realtek RTL8153) */
+ {
+ 	USB_DEVICE_AND_INTERFACE_INFO(TPLINK_VENDOR_ID, 0x0601, USB_CLASS_COMM,
+ 			USB_CDC_SUBCLASS_ETHERNET, USB_CDC_PROTO_NONE),
+diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
+index 95b19ce96513..7c8c45984a5c 100644
+--- a/drivers/net/usb/r8152.c
++++ b/drivers/net/usb/r8152.c
+@@ -6901,6 +6901,7 @@ static const struct usb_device_id rtl8152_table[] = {
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_REALTEK, 0x8153)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07ab)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x07c6)},
++	{REALTEK_USB_DEVICE(VENDOR_ID_MICROSOFT, 0x0927)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_SAMSUNG, 0xa101)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x304f)},
+ 	{REALTEK_USB_DEVICE(VENDOR_ID_LENOVO,  0x3062)},
+diff --git a/drivers/net/wireguard/messages.h b/drivers/net/wireguard/messages.h
+index b8a7b9ce32ba..208da72673fc 100644
+--- a/drivers/net/wireguard/messages.h
++++ b/drivers/net/wireguard/messages.h
+@@ -32,7 +32,7 @@ enum cookie_values {
+ };
+ 
+ enum counter_values {
+-	COUNTER_BITS_TOTAL = 2048,
++	COUNTER_BITS_TOTAL = 8192,
+ 	COUNTER_REDUNDANT_BITS = BITS_PER_LONG,
+ 	COUNTER_WINDOW_SIZE = COUNTER_BITS_TOTAL - COUNTER_REDUNDANT_BITS
+ };
+diff --git a/drivers/net/wireguard/noise.c b/drivers/net/wireguard/noise.c
+index 708dc61c974f..626433690abb 100644
+--- a/drivers/net/wireguard/noise.c
++++ b/drivers/net/wireguard/noise.c
+@@ -104,6 +104,7 @@ static struct noise_keypair *keypair_create(struct wg_peer *peer)
+ 
+ 	if (unlikely(!keypair))
+ 		return NULL;
++	spin_lock_init(&keypair->receiving_counter.lock);
+ 	keypair->internal_id = atomic64_inc_return(&keypair_counter);
+ 	keypair->entry.type = INDEX_HASHTABLE_KEYPAIR;
+ 	keypair->entry.peer = peer;
+@@ -358,25 +359,16 @@ out:
+ 	memzero_explicit(output, BLAKE2S_HASH_SIZE + 1);
+ }
+ 
+-static void symmetric_key_init(struct noise_symmetric_key *key)
+-{
+-	spin_lock_init(&key->counter.receive.lock);
+-	atomic64_set(&key->counter.counter, 0);
+-	memset(key->counter.receive.backtrack, 0,
+-	       sizeof(key->counter.receive.backtrack));
+-	key->birthdate = ktime_get_coarse_boottime_ns();
+-	key->is_valid = true;
+-}
+-
+ static void derive_keys(struct noise_symmetric_key *first_dst,
+ 			struct noise_symmetric_key *second_dst,
+ 			const u8 chaining_key[NOISE_HASH_LEN])
+ {
++	u64 birthdate = ktime_get_coarse_boottime_ns();
+ 	kdf(first_dst->key, second_dst->key, NULL, NULL,
+ 	    NOISE_SYMMETRIC_KEY_LEN, NOISE_SYMMETRIC_KEY_LEN, 0, 0,
+ 	    chaining_key);
+-	symmetric_key_init(first_dst);
+-	symmetric_key_init(second_dst);
++	first_dst->birthdate = second_dst->birthdate = birthdate;
++	first_dst->is_valid = second_dst->is_valid = true;
+ }
+ 
+ static bool __must_check mix_dh(u8 chaining_key[NOISE_HASH_LEN],
+@@ -715,6 +707,7 @@ wg_noise_handshake_consume_response(struct message_handshake_response *src,
+ 	u8 e[NOISE_PUBLIC_KEY_LEN];
+ 	u8 ephemeral_private[NOISE_PUBLIC_KEY_LEN];
+ 	u8 static_private[NOISE_PUBLIC_KEY_LEN];
++	u8 preshared_key[NOISE_SYMMETRIC_KEY_LEN];
+ 
+ 	down_read(&wg->static_identity.lock);
+ 
+@@ -733,6 +726,8 @@ wg_noise_handshake_consume_response(struct message_handshake_response *src,
+ 	memcpy(chaining_key, handshake->chaining_key, NOISE_HASH_LEN);
+ 	memcpy(ephemeral_private, handshake->ephemeral_private,
+ 	       NOISE_PUBLIC_KEY_LEN);
++	memcpy(preshared_key, handshake->preshared_key,
++	       NOISE_SYMMETRIC_KEY_LEN);
+ 	up_read(&handshake->lock);
+ 
+ 	if (state != HANDSHAKE_CREATED_INITIATION)
+@@ -750,7 +745,7 @@ wg_noise_handshake_consume_response(struct message_handshake_response *src,
+ 		goto fail;
+ 
+ 	/* psk */
+-	mix_psk(chaining_key, hash, key, handshake->preshared_key);
++	mix_psk(chaining_key, hash, key, preshared_key);
+ 
+ 	/* {} */
+ 	if (!message_decrypt(NULL, src->encrypted_nothing,
+@@ -783,6 +778,7 @@ out:
+ 	memzero_explicit(chaining_key, NOISE_HASH_LEN);
+ 	memzero_explicit(ephemeral_private, NOISE_PUBLIC_KEY_LEN);
+ 	memzero_explicit(static_private, NOISE_PUBLIC_KEY_LEN);
++	memzero_explicit(preshared_key, NOISE_SYMMETRIC_KEY_LEN);
+ 	up_read(&wg->static_identity.lock);
+ 	return ret_peer;
+ }
+diff --git a/drivers/net/wireguard/noise.h b/drivers/net/wireguard/noise.h
+index f532d59d3f19..c527253dba80 100644
+--- a/drivers/net/wireguard/noise.h
++++ b/drivers/net/wireguard/noise.h
+@@ -15,18 +15,14 @@
+ #include <linux/mutex.h>
+ #include <linux/kref.h>
+ 
+-union noise_counter {
+-	struct {
+-		u64 counter;
+-		unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
+-		spinlock_t lock;
+-	} receive;
+-	atomic64_t counter;
++struct noise_replay_counter {
++	u64 counter;
++	spinlock_t lock;
++	unsigned long backtrack[COUNTER_BITS_TOTAL / BITS_PER_LONG];
+ };
+ 
+ struct noise_symmetric_key {
+ 	u8 key[NOISE_SYMMETRIC_KEY_LEN];
+-	union noise_counter counter;
+ 	u64 birthdate;
+ 	bool is_valid;
+ };
+@@ -34,7 +30,9 @@ struct noise_symmetric_key {
+ struct noise_keypair {
+ 	struct index_hashtable_entry entry;
+ 	struct noise_symmetric_key sending;
++	atomic64_t sending_counter;
+ 	struct noise_symmetric_key receiving;
++	struct noise_replay_counter receiving_counter;
+ 	__le32 remote_index;
+ 	bool i_am_the_initiator;
+ 	struct kref refcount;
+diff --git a/drivers/net/wireguard/queueing.h b/drivers/net/wireguard/queueing.h
+index 3432232afe06..c58df439dbbe 100644
+--- a/drivers/net/wireguard/queueing.h
++++ b/drivers/net/wireguard/queueing.h
+@@ -87,12 +87,20 @@ static inline bool wg_check_packet_protocol(struct sk_buff *skb)
+ 	return real_protocol && skb->protocol == real_protocol;
+ }
+ 
+-static inline void wg_reset_packet(struct sk_buff *skb)
++static inline void wg_reset_packet(struct sk_buff *skb, bool encapsulating)
+ {
++	u8 l4_hash = skb->l4_hash;
++	u8 sw_hash = skb->sw_hash;
++	u32 hash = skb->hash;
+ 	skb_scrub_packet(skb, true);
+ 	memset(&skb->headers_start, 0,
+ 	       offsetof(struct sk_buff, headers_end) -
+ 		       offsetof(struct sk_buff, headers_start));
++	if (encapsulating) {
++		skb->l4_hash = l4_hash;
++		skb->sw_hash = sw_hash;
++		skb->hash = hash;
++	}
+ 	skb->queue_mapping = 0;
+ 	skb->nohdr = 0;
+ 	skb->peeked = 0;
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index 2566e13a292d..474bb69f0e1b 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -246,20 +246,20 @@ static void keep_key_fresh(struct wg_peer *peer)
+ 	}
+ }
+ 
+-static bool decrypt_packet(struct sk_buff *skb, struct noise_symmetric_key *key)
++static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
+ {
+ 	struct scatterlist sg[MAX_SKB_FRAGS + 8];
+ 	struct sk_buff *trailer;
+ 	unsigned int offset;
+ 	int num_frags;
+ 
+-	if (unlikely(!key))
++	if (unlikely(!keypair))
+ 		return false;
+ 
+-	if (unlikely(!READ_ONCE(key->is_valid) ||
+-		  wg_birthdate_has_expired(key->birthdate, REJECT_AFTER_TIME) ||
+-		  key->counter.receive.counter >= REJECT_AFTER_MESSAGES)) {
+-		WRITE_ONCE(key->is_valid, false);
++	if (unlikely(!READ_ONCE(keypair->receiving.is_valid) ||
++		  wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) ||
++		  keypair->receiving_counter.counter >= REJECT_AFTER_MESSAGES)) {
++		WRITE_ONCE(keypair->receiving.is_valid, false);
+ 		return false;
+ 	}
+ 
+@@ -284,7 +284,7 @@ static bool decrypt_packet(struct sk_buff *skb, struct noise_symmetric_key *key)
+ 
+ 	if (!chacha20poly1305_decrypt_sg_inplace(sg, skb->len, NULL, 0,
+ 					         PACKET_CB(skb)->nonce,
+-						 key->key))
++						 keypair->receiving.key))
+ 		return false;
+ 
+ 	/* Another ugly situation of pushing and pulling the header so as to
+@@ -299,41 +299,41 @@ static bool decrypt_packet(struct sk_buff *skb, struct noise_symmetric_key *key)
+ }
+ 
+ /* This is RFC6479, a replay detection bitmap algorithm that avoids bitshifts */
+-static bool counter_validate(union noise_counter *counter, u64 their_counter)
++static bool counter_validate(struct noise_replay_counter *counter, u64 their_counter)
+ {
+ 	unsigned long index, index_current, top, i;
+ 	bool ret = false;
+ 
+-	spin_lock_bh(&counter->receive.lock);
++	spin_lock_bh(&counter->lock);
+ 
+-	if (unlikely(counter->receive.counter >= REJECT_AFTER_MESSAGES + 1 ||
++	if (unlikely(counter->counter >= REJECT_AFTER_MESSAGES + 1 ||
+ 		     their_counter >= REJECT_AFTER_MESSAGES))
+ 		goto out;
+ 
+ 	++their_counter;
+ 
+ 	if (unlikely((COUNTER_WINDOW_SIZE + their_counter) <
+-		     counter->receive.counter))
++		     counter->counter))
+ 		goto out;
+ 
+ 	index = their_counter >> ilog2(BITS_PER_LONG);
+ 
+-	if (likely(their_counter > counter->receive.counter)) {
+-		index_current = counter->receive.counter >> ilog2(BITS_PER_LONG);
++	if (likely(their_counter > counter->counter)) {
++		index_current = counter->counter >> ilog2(BITS_PER_LONG);
+ 		top = min_t(unsigned long, index - index_current,
+ 			    COUNTER_BITS_TOTAL / BITS_PER_LONG);
+ 		for (i = 1; i <= top; ++i)
+-			counter->receive.backtrack[(i + index_current) &
++			counter->backtrack[(i + index_current) &
+ 				((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0;
+-		counter->receive.counter = their_counter;
++		counter->counter = their_counter;
+ 	}
+ 
+ 	index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1;
+ 	ret = !test_and_set_bit(their_counter & (BITS_PER_LONG - 1),
+-				&counter->receive.backtrack[index]);
++				&counter->backtrack[index]);
+ 
+ out:
+-	spin_unlock_bh(&counter->receive.lock);
++	spin_unlock_bh(&counter->lock);
+ 	return ret;
+ }
+ 
+@@ -473,19 +473,19 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
+ 		if (unlikely(state != PACKET_STATE_CRYPTED))
+ 			goto next;
+ 
+-		if (unlikely(!counter_validate(&keypair->receiving.counter,
++		if (unlikely(!counter_validate(&keypair->receiving_counter,
+ 					       PACKET_CB(skb)->nonce))) {
+ 			net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n",
+ 					    peer->device->dev->name,
+ 					    PACKET_CB(skb)->nonce,
+-					    keypair->receiving.counter.receive.counter);
++					    keypair->receiving_counter.counter);
+ 			goto next;
+ 		}
+ 
+ 		if (unlikely(wg_socket_endpoint_from_skb(&endpoint, skb)))
+ 			goto next;
+ 
+-		wg_reset_packet(skb);
++		wg_reset_packet(skb, false);
+ 		wg_packet_consume_data_done(peer, skb, &endpoint);
+ 		free = false;
+ 
+@@ -512,8 +512,8 @@ void wg_packet_decrypt_worker(struct work_struct *work)
+ 	struct sk_buff *skb;
+ 
+ 	while ((skb = ptr_ring_consume_bh(&queue->ring)) != NULL) {
+-		enum packet_state state = likely(decrypt_packet(skb,
+-				&PACKET_CB(skb)->keypair->receiving)) ?
++		enum packet_state state =
++			likely(decrypt_packet(skb, PACKET_CB(skb)->keypair)) ?
+ 				PACKET_STATE_CRYPTED : PACKET_STATE_DEAD;
+ 		wg_queue_enqueue_per_peer_napi(skb, state);
+ 		if (need_resched())
+diff --git a/drivers/net/wireguard/selftest/counter.c b/drivers/net/wireguard/selftest/counter.c
+index f4fbb9072ed7..ec3c156bf91b 100644
+--- a/drivers/net/wireguard/selftest/counter.c
++++ b/drivers/net/wireguard/selftest/counter.c
+@@ -6,18 +6,24 @@
+ #ifdef DEBUG
+ bool __init wg_packet_counter_selftest(void)
+ {
++	struct noise_replay_counter *counter;
+ 	unsigned int test_num = 0, i;
+-	union noise_counter counter;
+ 	bool success = true;
+ 
+-#define T_INIT do {                                               \
+-		memset(&counter, 0, sizeof(union noise_counter)); \
+-		spin_lock_init(&counter.receive.lock);            \
++	counter = kmalloc(sizeof(*counter), GFP_KERNEL);
++	if (unlikely(!counter)) {
++		pr_err("nonce counter self-test malloc: FAIL\n");
++		return false;
++	}
++
++#define T_INIT do {                                    \
++		memset(counter, 0, sizeof(*counter));  \
++		spin_lock_init(&counter->lock);        \
+ 	} while (0)
+ #define T_LIM (COUNTER_WINDOW_SIZE + 1)
+ #define T(n, v) do {                                                  \
+ 		++test_num;                                           \
+-		if (counter_validate(&counter, n) != (v)) {           \
++		if (counter_validate(counter, n) != (v)) {            \
+ 			pr_err("nonce counter self-test %u: FAIL\n",  \
+ 			       test_num);                             \
+ 			success = false;                              \
+@@ -99,6 +105,7 @@ bool __init wg_packet_counter_selftest(void)
+ 
+ 	if (success)
+ 		pr_info("nonce counter self-tests: pass\n");
++	kfree(counter);
+ 	return success;
+ }
+ #endif
+diff --git a/drivers/net/wireguard/send.c b/drivers/net/wireguard/send.c
+index e8a7d0a0cb88..485d5d7a217b 100644
+--- a/drivers/net/wireguard/send.c
++++ b/drivers/net/wireguard/send.c
+@@ -129,7 +129,7 @@ static void keep_key_fresh(struct wg_peer *peer)
+ 	rcu_read_lock_bh();
+ 	keypair = rcu_dereference_bh(peer->keypairs.current_keypair);
+ 	if (likely(keypair && READ_ONCE(keypair->sending.is_valid)) &&
+-	    (unlikely(atomic64_read(&keypair->sending.counter.counter) >
++	    (unlikely(atomic64_read(&keypair->sending_counter) >
+ 		      REKEY_AFTER_MESSAGES) ||
+ 	     (keypair->i_am_the_initiator &&
+ 	      unlikely(wg_birthdate_has_expired(keypair->sending.birthdate,
+@@ -170,6 +170,11 @@ static bool encrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
+ 	struct sk_buff *trailer;
+ 	int num_frags;
+ 
++	/* Force hash calculation before encryption so that flow analysis is
++	 * consistent over the inner packet.
++	 */
++	skb_get_hash(skb);
++
+ 	/* Calculate lengths. */
+ 	padding_len = calculate_skb_padding(skb);
+ 	trailer_len = padding_len + noise_encrypted_len(0);
+@@ -298,7 +303,7 @@ void wg_packet_encrypt_worker(struct work_struct *work)
+ 		skb_list_walk_safe(first, skb, next) {
+ 			if (likely(encrypt_packet(skb,
+ 					PACKET_CB(first)->keypair))) {
+-				wg_reset_packet(skb);
++				wg_reset_packet(skb, true);
+ 			} else {
+ 				state = PACKET_STATE_DEAD;
+ 				break;
+@@ -348,7 +353,6 @@ void wg_packet_purge_staged_packets(struct wg_peer *peer)
+ 
+ void wg_packet_send_staged_packets(struct wg_peer *peer)
+ {
+-	struct noise_symmetric_key *key;
+ 	struct noise_keypair *keypair;
+ 	struct sk_buff_head packets;
+ 	struct sk_buff *skb;
+@@ -368,10 +372,9 @@ void wg_packet_send_staged_packets(struct wg_peer *peer)
+ 	rcu_read_unlock_bh();
+ 	if (unlikely(!keypair))
+ 		goto out_nokey;
+-	key = &keypair->sending;
+-	if (unlikely(!READ_ONCE(key->is_valid)))
++	if (unlikely(!READ_ONCE(keypair->sending.is_valid)))
+ 		goto out_nokey;
+-	if (unlikely(wg_birthdate_has_expired(key->birthdate,
++	if (unlikely(wg_birthdate_has_expired(keypair->sending.birthdate,
+ 					      REJECT_AFTER_TIME)))
+ 		goto out_invalid;
+ 
+@@ -386,7 +389,7 @@ void wg_packet_send_staged_packets(struct wg_peer *peer)
+ 		 */
+ 		PACKET_CB(skb)->ds = ip_tunnel_ecn_encap(0, ip_hdr(skb), skb);
+ 		PACKET_CB(skb)->nonce =
+-				atomic64_inc_return(&key->counter.counter) - 1;
++				atomic64_inc_return(&keypair->sending_counter) - 1;
+ 		if (unlikely(PACKET_CB(skb)->nonce >= REJECT_AFTER_MESSAGES))
+ 			goto out_invalid;
+ 	}
+@@ -398,7 +401,7 @@ void wg_packet_send_staged_packets(struct wg_peer *peer)
+ 	return;
+ 
+ out_invalid:
+-	WRITE_ONCE(key->is_valid, false);
++	WRITE_ONCE(keypair->sending.is_valid, false);
+ out_nokey:
+ 	wg_noise_keypair_put(keypair, false);
+ 
+diff --git a/drivers/soc/mediatek/mtk-cmdq-helper.c b/drivers/soc/mediatek/mtk-cmdq-helper.c
+index db37144ae98c..87ee9f767b7a 100644
+--- a/drivers/soc/mediatek/mtk-cmdq-helper.c
++++ b/drivers/soc/mediatek/mtk-cmdq-helper.c
+@@ -351,7 +351,9 @@ int cmdq_pkt_flush_async(struct cmdq_pkt *pkt, cmdq_async_flush_cb cb,
+ 		spin_unlock_irqrestore(&client->lock, flags);
+ 	}
+ 
+-	mbox_send_message(client->chan, pkt);
++	err = mbox_send_message(client->chan, pkt);
++	if (err < 0)
++		return err;
+ 	/* We can send next packet immediately, so just call txdone. */
+ 	mbox_client_txdone(client->chan, 0);
+ 
+diff --git a/drivers/usb/dwc3/dwc3-pci.c b/drivers/usb/dwc3/dwc3-pci.c
+index 7051611229c9..b67372737dc9 100644
+--- a/drivers/usb/dwc3/dwc3-pci.c
++++ b/drivers/usb/dwc3/dwc3-pci.c
+@@ -114,6 +114,7 @@ static const struct property_entry dwc3_pci_intel_properties[] = {
+ 
+ static const struct property_entry dwc3_pci_mrfld_properties[] = {
+ 	PROPERTY_ENTRY_STRING("dr_mode", "otg"),
++	PROPERTY_ENTRY_STRING("linux,extcon-name", "mrfld_bcove_pwrsrc"),
+ 	PROPERTY_ENTRY_BOOL("linux,sysdev_is_parent"),
+ 	{}
+ };
+diff --git a/drivers/usb/gadget/legacy/inode.c b/drivers/usb/gadget/legacy/inode.c
+index b47938dff1a2..238f555fe494 100644
+--- a/drivers/usb/gadget/legacy/inode.c
++++ b/drivers/usb/gadget/legacy/inode.c
+@@ -1361,7 +1361,6 @@ gadgetfs_setup (struct usb_gadget *gadget, const struct usb_ctrlrequest *ctrl)
+ 
+ 	req->buf = dev->rbuf;
+ 	req->context = NULL;
+-	value = -EOPNOTSUPP;
+ 	switch (ctrl->bRequest) {
+ 
+ 	case USB_REQ_GET_DESCRIPTOR:
+@@ -1784,7 +1783,7 @@ static ssize_t
+ dev_config (struct file *fd, const char __user *buf, size_t len, loff_t *ptr)
+ {
+ 	struct dev_data		*dev = fd->private_data;
+-	ssize_t			value = len, length = len;
++	ssize_t			value, length = len;
+ 	unsigned		total;
+ 	u32			tag;
+ 	char			*kbuf;
+diff --git a/drivers/usb/phy/phy-twl6030-usb.c b/drivers/usb/phy/phy-twl6030-usb.c
+index bfebf1f2e991..9a7e655d5280 100644
+--- a/drivers/usb/phy/phy-twl6030-usb.c
++++ b/drivers/usb/phy/phy-twl6030-usb.c
+@@ -377,7 +377,7 @@ static int twl6030_usb_probe(struct platform_device *pdev)
+ 	if (status < 0) {
+ 		dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
+ 			twl->irq1, status);
+-		return status;
++		goto err_put_regulator;
+ 	}
+ 
+ 	status = request_threaded_irq(twl->irq2, NULL, twl6030_usb_irq,
+@@ -386,8 +386,7 @@ static int twl6030_usb_probe(struct platform_device *pdev)
+ 	if (status < 0) {
+ 		dev_err(&pdev->dev, "can't get IRQ %d, err %d\n",
+ 			twl->irq2, status);
+-		free_irq(twl->irq1, twl);
+-		return status;
++		goto err_free_irq1;
+ 	}
+ 
+ 	twl->asleep = 0;
+@@ -396,6 +395,13 @@ static int twl6030_usb_probe(struct platform_device *pdev)
+ 	dev_info(&pdev->dev, "Initialized TWL6030 USB module\n");
+ 
+ 	return 0;
++
++err_free_irq1:
++	free_irq(twl->irq1, twl);
++err_put_regulator:
++	regulator_put(twl->usb3v3);
++
++	return status;
+ }
+ 
+ static int twl6030_usb_remove(struct platform_device *pdev)
+diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
+index 341458fd95ca..44375a22307b 100644
+--- a/drivers/virtio/virtio_balloon.c
++++ b/drivers/virtio/virtio_balloon.c
+@@ -14,6 +14,7 @@
+ #include <linux/slab.h>
+ #include <linux/module.h>
+ #include <linux/balloon_compaction.h>
++#include <linux/oom.h>
+ #include <linux/wait.h>
+ #include <linux/mm.h>
+ #include <linux/mount.h>
+@@ -27,7 +28,9 @@
+  */
+ #define VIRTIO_BALLOON_PAGES_PER_PAGE (unsigned)(PAGE_SIZE >> VIRTIO_BALLOON_PFN_SHIFT)
+ #define VIRTIO_BALLOON_ARRAY_PFNS_MAX 256
+-#define VIRTBALLOON_OOM_NOTIFY_PRIORITY 80
++/* Maximum number of (4k) pages to deflate on OOM notifications. */
++#define VIRTIO_BALLOON_OOM_NR_PAGES 256
++#define VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY 80
+ 
+ #define VIRTIO_BALLOON_FREE_PAGE_ALLOC_FLAG (__GFP_NORETRY | __GFP_NOWARN | \
+ 					     __GFP_NOMEMALLOC)
+@@ -112,8 +115,11 @@ struct virtio_balloon {
+ 	/* Memory statistics */
+ 	struct virtio_balloon_stat stats[VIRTIO_BALLOON_S_NR];
+ 
+-	/* To register a shrinker to shrink memory upon memory pressure */
++	/* Shrinker to return free pages - VIRTIO_BALLOON_F_FREE_PAGE_HINT */
+ 	struct shrinker shrinker;
++
++	/* OOM notifier to deflate on OOM - VIRTIO_BALLOON_F_DEFLATE_ON_OOM */
++	struct notifier_block oom_nb;
+ };
+ 
+ static struct virtio_device_id id_table[] = {
+@@ -788,50 +794,13 @@ static unsigned long shrink_free_pages(struct virtio_balloon *vb,
+ 	return blocks_freed * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ }
+ 
+-static unsigned long leak_balloon_pages(struct virtio_balloon *vb,
+-                                          unsigned long pages_to_free)
+-{
+-	return leak_balloon(vb, pages_to_free * VIRTIO_BALLOON_PAGES_PER_PAGE) /
+-		VIRTIO_BALLOON_PAGES_PER_PAGE;
+-}
+-
+-static unsigned long shrink_balloon_pages(struct virtio_balloon *vb,
+-					  unsigned long pages_to_free)
+-{
+-	unsigned long pages_freed = 0;
+-
+-	/*
+-	 * One invocation of leak_balloon can deflate at most
+-	 * VIRTIO_BALLOON_ARRAY_PFNS_MAX balloon pages, so we call it
+-	 * multiple times to deflate pages till reaching pages_to_free.
+-	 */
+-	while (vb->num_pages && pages_freed < pages_to_free)
+-		pages_freed += leak_balloon_pages(vb,
+-						  pages_to_free - pages_freed);
+-
+-	update_balloon_size(vb);
+-
+-	return pages_freed;
+-}
+-
+ static unsigned long virtio_balloon_shrinker_scan(struct shrinker *shrinker,
+ 						  struct shrink_control *sc)
+ {
+-	unsigned long pages_to_free, pages_freed = 0;
+ 	struct virtio_balloon *vb = container_of(shrinker,
+ 					struct virtio_balloon, shrinker);
+ 
+-	pages_to_free = sc->nr_to_scan;
+-
+-	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+-		pages_freed = shrink_free_pages(vb, pages_to_free);
+-
+-	if (pages_freed >= pages_to_free)
+-		return pages_freed;
+-
+-	pages_freed += shrink_balloon_pages(vb, pages_to_free - pages_freed);
+-
+-	return pages_freed;
++	return shrink_free_pages(vb, sc->nr_to_scan);
+ }
+ 
+ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+@@ -839,26 +808,22 @@ static unsigned long virtio_balloon_shrinker_count(struct shrinker *shrinker,
+ {
+ 	struct virtio_balloon *vb = container_of(shrinker,
+ 					struct virtio_balloon, shrinker);
+-	unsigned long count;
+-
+-	count = vb->num_pages / VIRTIO_BALLOON_PAGES_PER_PAGE;
+-	count += vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ 
+-	return count;
++	return vb->num_free_page_blocks * VIRTIO_BALLOON_HINT_BLOCK_PAGES;
+ }
+ 
+-static void virtio_balloon_unregister_shrinker(struct virtio_balloon *vb)
++static int virtio_balloon_oom_notify(struct notifier_block *nb,
++				     unsigned long dummy, void *parm)
+ {
+-	unregister_shrinker(&vb->shrinker);
+-}
++	struct virtio_balloon *vb = container_of(nb,
++						 struct virtio_balloon, oom_nb);
++	unsigned long *freed = parm;
+ 
+-static int virtio_balloon_register_shrinker(struct virtio_balloon *vb)
+-{
+-	vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
+-	vb->shrinker.count_objects = virtio_balloon_shrinker_count;
+-	vb->shrinker.seeks = DEFAULT_SEEKS;
++	*freed += leak_balloon(vb, VIRTIO_BALLOON_OOM_NR_PAGES) /
++		  VIRTIO_BALLOON_PAGES_PER_PAGE;
++	update_balloon_size(vb);
+ 
+-	return register_shrinker(&vb->shrinker);
++	return NOTIFY_OK;
+ }
+ 
+ static int virtballoon_probe(struct virtio_device *vdev)
+@@ -935,22 +900,35 @@ static int virtballoon_probe(struct virtio_device *vdev)
+ 			virtio_cwrite(vb->vdev, struct virtio_balloon_config,
+ 				      poison_val, &poison_val);
+ 		}
+-	}
+-	/*
+-	 * We continue to use VIRTIO_BALLOON_F_DEFLATE_ON_OOM to decide if a
+-	 * shrinker needs to be registered to relieve memory pressure.
+-	 */
+-	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
+-		err = virtio_balloon_register_shrinker(vb);
++
++		/*
++		 * We're allowed to reuse any free pages, even if they are
++		 * still to be processed by the host.
++		 */
++		vb->shrinker.scan_objects = virtio_balloon_shrinker_scan;
++		vb->shrinker.count_objects = virtio_balloon_shrinker_count;
++		vb->shrinker.seeks = DEFAULT_SEEKS;
++		err = register_shrinker(&vb->shrinker);
+ 		if (err)
+ 			goto out_del_balloon_wq;
+ 	}
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM)) {
++		vb->oom_nb.notifier_call = virtio_balloon_oom_notify;
++		vb->oom_nb.priority = VIRTIO_BALLOON_OOM_NOTIFY_PRIORITY;
++		err = register_oom_notifier(&vb->oom_nb);
++		if (err < 0)
++			goto out_unregister_shrinker;
++	}
++
+ 	virtio_device_ready(vdev);
+ 
+ 	if (towards_target(vb))
+ 		virtballoon_changed(vdev);
+ 	return 0;
+ 
++out_unregister_shrinker:
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
++		unregister_shrinker(&vb->shrinker);
+ out_del_balloon_wq:
+ 	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
+ 		destroy_workqueue(vb->balloon_wq);
+@@ -989,8 +967,11 @@ static void virtballoon_remove(struct virtio_device *vdev)
+ {
+ 	struct virtio_balloon *vb = vdev->priv;
+ 
+-	if (virtio_has_feature(vb->vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
+-		virtio_balloon_unregister_shrinker(vb);
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_DEFLATE_ON_OOM))
++		unregister_oom_notifier(&vb->oom_nb);
++	if (virtio_has_feature(vdev, VIRTIO_BALLOON_F_FREE_PAGE_HINT))
++		unregister_shrinker(&vb->shrinker);
++
+ 	spin_lock_irq(&vb->stop_update_lock);
+ 	vb->stop_update = true;
+ 	spin_unlock_irq(&vb->stop_update_lock);
+diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
+index f4713ea76e82..54f888ddb8cc 100644
+--- a/fs/binfmt_elf.c
++++ b/fs/binfmt_elf.c
+@@ -1733,7 +1733,7 @@ static int fill_thread_core_info(struct elf_thread_core_info *t,
+ 		    (!regset->active || regset->active(t->task, regset) > 0)) {
+ 			int ret;
+ 			size_t size = regset_size(t->task, regset);
+-			void *data = kmalloc(size, GFP_KERNEL);
++			void *data = kzalloc(size, GFP_KERNEL);
+ 			if (unlikely(!data))
+ 				return 0;
+ 			ret = regset->get(t->task, regset,
+diff --git a/fs/cachefiles/rdwr.c b/fs/cachefiles/rdwr.c
+index d3d78176b23c..e7726f5f1241 100644
+--- a/fs/cachefiles/rdwr.c
++++ b/fs/cachefiles/rdwr.c
+@@ -60,9 +60,9 @@ static int cachefiles_read_waiter(wait_queue_entry_t *wait, unsigned mode,
+ 	object = container_of(op->op.object, struct cachefiles_object, fscache);
+ 	spin_lock(&object->work_lock);
+ 	list_add_tail(&monitor->op_link, &op->to_do);
++	fscache_enqueue_retrieval(op);
+ 	spin_unlock(&object->work_lock);
+ 
+-	fscache_enqueue_retrieval(op);
+ 	fscache_put_retrieval(op);
+ 	return 0;
+ }
+diff --git a/fs/ceph/caps.c b/fs/ceph/caps.c
+index f50204380a65..3ae88ca03ccd 100644
+--- a/fs/ceph/caps.c
++++ b/fs/ceph/caps.c
+@@ -3952,7 +3952,7 @@ void ceph_handle_caps(struct ceph_mds_session *session,
+ 			__ceph_queue_cap_release(session, cap);
+ 			spin_unlock(&session->s_cap_lock);
+ 		}
+-		goto done;
++		goto flush_cap_releases;
+ 	}
+ 
+ 	/* these will work even if we don't have a cap yet */
+diff --git a/fs/cifs/file.c b/fs/cifs/file.c
+index 5920820bfbd0..b30b03747dd6 100644
+--- a/fs/cifs/file.c
++++ b/fs/cifs/file.c
+@@ -4060,7 +4060,7 @@ cifs_read(struct file *file, char *read_data, size_t read_size, loff_t *offset)
+ 			 * than it negotiated since it will refuse the read
+ 			 * then.
+ 			 */
+-			if ((tcon->ses) && !(tcon->ses->capabilities &
++			if (!(tcon->ses->capabilities &
+ 				tcon->ses->server->vals->cap_large_files)) {
+ 				current_read_size = min_t(uint,
+ 					current_read_size, CIFSMaxBufSize);
+diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c
+index 60d911e293e6..2674feda1d7a 100644
+--- a/fs/gfs2/log.c
++++ b/fs/gfs2/log.c
+@@ -603,13 +603,13 @@ void gfs2_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd)
+ 	struct buffer_head *bh = bd->bd_bh;
+ 	struct gfs2_glock *gl = bd->bd_gl;
+ 
++	sdp->sd_log_num_revoke++;
++	if (atomic_inc_return(&gl->gl_revokes) == 1)
++		gfs2_glock_hold(gl);
+ 	bh->b_private = NULL;
+ 	bd->bd_blkno = bh->b_blocknr;
+ 	gfs2_remove_from_ail(bd); /* drops ref on bh */
+ 	bd->bd_bh = NULL;
+-	sdp->sd_log_num_revoke++;
+-	if (atomic_inc_return(&gl->gl_revokes) == 1)
+-		gfs2_glock_hold(gl);
+ 	set_bit(GLF_LFLUSH, &gl->gl_flags);
+ 	list_add(&bd->bd_list, &sdp->sd_log_revokes);
+ }
+diff --git a/fs/gfs2/quota.c b/fs/gfs2/quota.c
+index e9f93045eb01..832d44782f74 100644
+--- a/fs/gfs2/quota.c
++++ b/fs/gfs2/quota.c
+@@ -1040,8 +1040,7 @@ int gfs2_quota_lock(struct gfs2_inode *ip, kuid_t uid, kgid_t gid)
+ 	u32 x;
+ 	int error = 0;
+ 
+-	if (capable(CAP_SYS_RESOURCE) ||
+-	    sdp->sd_args.ar_quota != GFS2_QUOTA_ON)
++	if (sdp->sd_args.ar_quota != GFS2_QUOTA_ON)
+ 		return 0;
+ 
+ 	error = gfs2_quota_hold(ip, uid, gid);
+diff --git a/fs/gfs2/quota.h b/fs/gfs2/quota.h
+index 765627d9a91e..fe68a91dc16f 100644
+--- a/fs/gfs2/quota.h
++++ b/fs/gfs2/quota.h
+@@ -44,7 +44,8 @@ static inline int gfs2_quota_lock_check(struct gfs2_inode *ip,
+ 	int ret;
+ 
+ 	ap->allowed = UINT_MAX; /* Assume we are permitted a whole lot */
+-	if (sdp->sd_args.ar_quota == GFS2_QUOTA_OFF)
++	if (capable(CAP_SYS_RESOURCE) ||
++	    sdp->sd_args.ar_quota == GFS2_QUOTA_OFF)
+ 		return 0;
+ 	ret = gfs2_quota_lock(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE);
+ 	if (ret)
+diff --git a/include/asm-generic/topology.h b/include/asm-generic/topology.h
+index 238873739550..5aa8705df87e 100644
+--- a/include/asm-generic/topology.h
++++ b/include/asm-generic/topology.h
+@@ -48,7 +48,7 @@
+   #ifdef CONFIG_NEED_MULTIPLE_NODES
+     #define cpumask_of_node(node)	((node) == 0 ? cpu_online_mask : cpu_none_mask)
+   #else
+-    #define cpumask_of_node(node)	((void)node, cpu_online_mask)
++    #define cpumask_of_node(node)	((void)(node), cpu_online_mask)
+   #endif
+ #endif
+ #ifndef pcibus_to_node
+diff --git a/include/linux/ieee80211.h b/include/linux/ieee80211.h
+index 7f3486e32e5d..624d2643bfba 100644
+--- a/include/linux/ieee80211.h
++++ b/include/linux/ieee80211.h
+@@ -2047,7 +2047,7 @@ ieee80211_he_ppe_size(u8 ppe_thres_hdr, const u8 *phy_cap_info)
+ }
+ 
+ /* HE Operation defines */
+-#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_MASK		0x00000003
++#define IEEE80211_HE_OPERATION_DFLT_PE_DURATION_MASK		0x00000007
+ #define IEEE80211_HE_OPERATION_TWT_REQUIRED			0x00000008
+ #define IEEE80211_HE_OPERATION_RTS_THRESHOLD_MASK		0x00003ff0
+ #define IEEE80211_HE_OPERATION_RTS_THRESHOLD_OFFSET		4
+diff --git a/include/linux/mlx5/driver.h b/include/linux/mlx5/driver.h
+index 277a51d3ec40..a1842ce8bd4e 100644
+--- a/include/linux/mlx5/driver.h
++++ b/include/linux/mlx5/driver.h
+@@ -230,6 +230,12 @@ struct mlx5_bfreg_info {
+ 	u32			num_dyn_bfregs;
+ };
+ 
++enum mlx5_cmdif_state {
++	MLX5_CMDIF_STATE_UNINITIALIZED,
++	MLX5_CMDIF_STATE_UP,
++	MLX5_CMDIF_STATE_DOWN,
++};
++
+ struct mlx5_cmd_first {
+ 	__be32		data[4];
+ };
+@@ -275,6 +281,7 @@ struct mlx5_cmd_stats {
+ struct mlx5_cmd {
+ 	struct mlx5_nb    nb;
+ 
++	enum mlx5_cmdif_state	state;
+ 	void	       *cmd_alloc_buf;
+ 	dma_addr_t	alloc_dma;
+ 	int		alloc_size;
+@@ -301,6 +308,7 @@ struct mlx5_cmd {
+ 	struct semaphore sem;
+ 	struct semaphore pages_sem;
+ 	int	mode;
++	u16     allowed_opcode;
+ 	struct mlx5_cmd_work_ent *ent_arr[MLX5_MAX_COMMANDS];
+ 	struct dma_pool *pool;
+ 	struct mlx5_cmd_debug dbg;
+@@ -761,6 +769,7 @@ struct mlx5_cmd_work_ent {
+ 	struct delayed_work	cb_timeout_work;
+ 	void		       *context;
+ 	int			idx;
++	struct completion	handling;
+ 	struct completion	done;
+ 	struct mlx5_cmd        *cmd;
+ 	struct work_struct	work;
+@@ -892,10 +901,17 @@ mlx5_frag_buf_get_idx_last_contig_stride(struct mlx5_frag_buf_ctrl *fbc, u32 ix)
+ 	return min_t(u32, last_frag_stride_idx - fbc->strides_offset, fbc->sz_m1);
+ }
+ 
++enum {
++	CMD_ALLOWED_OPCODE_ALL,
++};
++
+ int mlx5_cmd_init(struct mlx5_core_dev *dev);
+ void mlx5_cmd_cleanup(struct mlx5_core_dev *dev);
++void mlx5_cmd_set_state(struct mlx5_core_dev *dev,
++			enum mlx5_cmdif_state cmdif_state);
+ void mlx5_cmd_use_events(struct mlx5_core_dev *dev);
+ void mlx5_cmd_use_polling(struct mlx5_core_dev *dev);
++void mlx5_cmd_allowed_opcode(struct mlx5_core_dev *dev, u16 opcode);
+ 
+ struct mlx5_async_ctx {
+ 	struct mlx5_core_dev *dev;
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index c54fb96cb1e6..96deeecd9179 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -670,6 +670,11 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+ 
+ extern void kvfree(const void *addr);
+ 
++/*
++ * Mapcount of compound page as a whole, does not include mapped sub-pages.
++ *
++ * Must be called only for compound pages or any their tail sub-pages.
++ */
+ static inline int compound_mapcount(struct page *page)
+ {
+ 	VM_BUG_ON_PAGE(!PageCompound(page), page);
+@@ -689,10 +694,16 @@ static inline void page_mapcount_reset(struct page *page)
+ 
+ int __page_mapcount(struct page *page);
+ 
++/*
++ * Mapcount of 0-order page; when compound sub-page, includes
++ * compound_mapcount().
++ *
++ * Result is undefined for pages which cannot be mapped into userspace.
++ * For example SLAB or special types of pages. See function page_has_type().
++ * They use this place in struct page differently.
++ */
+ static inline int page_mapcount(struct page *page)
+ {
+-	VM_BUG_ON_PAGE(PageSlab(page), page);
+-
+ 	if (unlikely(PageCompound(page)))
+ 		return __page_mapcount(page);
+ 	return atomic_read(&page->_mapcount) + 1;
+diff --git a/include/linux/netfilter/nf_conntrack_pptp.h b/include/linux/netfilter/nf_conntrack_pptp.h
+index fcc409de31a4..a28aa289afdc 100644
+--- a/include/linux/netfilter/nf_conntrack_pptp.h
++++ b/include/linux/netfilter/nf_conntrack_pptp.h
+@@ -10,7 +10,7 @@
+ #include <net/netfilter/nf_conntrack_expect.h>
+ #include <uapi/linux/netfilter/nf_conntrack_tuple_common.h>
+ 
+-extern const char *const pptp_msg_name[];
++const char *pptp_msg_name(u_int16_t msg);
+ 
+ /* state of the control session */
+ enum pptp_ctrlsess_state {
+diff --git a/include/net/act_api.h b/include/net/act_api.h
+index 71347a90a9d1..050c0246dee8 100644
+--- a/include/net/act_api.h
++++ b/include/net/act_api.h
+@@ -69,7 +69,8 @@ static inline void tcf_tm_dump(struct tcf_t *dtm, const struct tcf_t *stm)
+ {
+ 	dtm->install = jiffies_to_clock_t(jiffies - stm->install);
+ 	dtm->lastuse = jiffies_to_clock_t(jiffies - stm->lastuse);
+-	dtm->firstuse = jiffies_to_clock_t(jiffies - stm->firstuse);
++	dtm->firstuse = stm->firstuse ?
++		jiffies_to_clock_t(jiffies - stm->firstuse) : 0;
+ 	dtm->expires = jiffies_to_clock_t(stm->expires);
+ }
+ 
+diff --git a/include/net/espintcp.h b/include/net/espintcp.h
+index dd7026a00066..0335bbd76552 100644
+--- a/include/net/espintcp.h
++++ b/include/net/espintcp.h
+@@ -25,6 +25,7 @@ struct espintcp_ctx {
+ 	struct espintcp_msg partial;
+ 	void (*saved_data_ready)(struct sock *sk);
+ 	void (*saved_write_space)(struct sock *sk);
++	void (*saved_destruct)(struct sock *sk);
+ 	struct work_struct work;
+ 	bool tx_running;
+ };
+diff --git a/include/net/ip_fib.h b/include/net/ip_fib.h
+index 6a1ae49809de..464772420206 100644
+--- a/include/net/ip_fib.h
++++ b/include/net/ip_fib.h
+@@ -257,7 +257,6 @@ struct fib_dump_filter {
+ 	u32			table_id;
+ 	/* filter_set is an optimization that an entry is set */
+ 	bool			filter_set;
+-	bool			dump_all_families;
+ 	bool			dump_routes;
+ 	bool			dump_exceptions;
+ 	unsigned char		protocol;
+@@ -448,6 +447,16 @@ static inline int fib_num_tclassid_users(struct net *net)
+ #endif
+ int fib_unmerge(struct net *net);
+ 
++static inline bool nhc_l3mdev_matches_dev(const struct fib_nh_common *nhc,
++const struct net_device *dev)
++{
++	if (nhc->nhc_dev == dev ||
++	    l3mdev_master_ifindex_rcu(nhc->nhc_dev) == dev->ifindex)
++		return true;
++
++	return false;
++}
++
+ /* Exported by fib_semantics.c */
+ int ip_fib_check_default(__be32 gw, struct net_device *dev);
+ int fib_sync_down_dev(struct net_device *dev, unsigned long event, bool force);
+diff --git a/include/net/nexthop.h b/include/net/nexthop.h
+index 331ebbc94fe7..3bb618e5ecf7 100644
+--- a/include/net/nexthop.h
++++ b/include/net/nexthop.h
+@@ -70,6 +70,7 @@ struct nh_grp_entry {
+ };
+ 
+ struct nh_group {
++	struct nh_group		*spare; /* spare group for removals */
+ 	u16			num_nh;
+ 	bool			mpath;
+ 	bool			has_v4;
+@@ -136,21 +137,20 @@ static inline unsigned int nexthop_num_path(const struct nexthop *nh)
+ {
+ 	unsigned int rc = 1;
+ 
+-	if (nexthop_is_multipath(nh)) {
++	if (nh->is_group) {
+ 		struct nh_group *nh_grp;
+ 
+ 		nh_grp = rcu_dereference_rtnl(nh->nh_grp);
+-		rc = nh_grp->num_nh;
++		if (nh_grp->mpath)
++			rc = nh_grp->num_nh;
+ 	}
+ 
+ 	return rc;
+ }
+ 
+ static inline
+-struct nexthop *nexthop_mpath_select(const struct nexthop *nh, int nhsel)
++struct nexthop *nexthop_mpath_select(const struct nh_group *nhg, int nhsel)
+ {
+-	const struct nh_group *nhg = rcu_dereference_rtnl(nh->nh_grp);
+-
+ 	/* for_nexthops macros in fib_semantics.c grabs a pointer to
+ 	 * the nexthop before checking nhsel
+ 	 */
+@@ -185,12 +185,14 @@ static inline bool nexthop_is_blackhole(const struct nexthop *nh)
+ {
+ 	const struct nh_info *nhi;
+ 
+-	if (nexthop_is_multipath(nh)) {
+-		if (nexthop_num_path(nh) > 1)
+-			return false;
+-		nh = nexthop_mpath_select(nh, 0);
+-		if (!nh)
++	if (nh->is_group) {
++		struct nh_group *nh_grp;
++
++		nh_grp = rcu_dereference_rtnl(nh->nh_grp);
++		if (nh_grp->num_nh > 1)
+ 			return false;
++
++		nh = nh_grp->nh_entries[0].nh;
+ 	}
+ 
+ 	nhi = rcu_dereference_rtnl(nh->nh_info);
+@@ -216,16 +218,46 @@ struct fib_nh_common *nexthop_fib_nhc(struct nexthop *nh, int nhsel)
+ 	BUILD_BUG_ON(offsetof(struct fib_nh, nh_common) != 0);
+ 	BUILD_BUG_ON(offsetof(struct fib6_nh, nh_common) != 0);
+ 
+-	if (nexthop_is_multipath(nh)) {
+-		nh = nexthop_mpath_select(nh, nhsel);
+-		if (!nh)
+-			return NULL;
++	if (nh->is_group) {
++		struct nh_group *nh_grp;
++
++		nh_grp = rcu_dereference_rtnl(nh->nh_grp);
++		if (nh_grp->mpath) {
++			nh = nexthop_mpath_select(nh_grp, nhsel);
++			if (!nh)
++				return NULL;
++		}
+ 	}
+ 
+ 	nhi = rcu_dereference_rtnl(nh->nh_info);
+ 	return &nhi->fib_nhc;
+ }
+ 
++static inline bool nexthop_uses_dev(const struct nexthop *nh,
++				    const struct net_device *dev)
++{
++	struct nh_info *nhi;
++
++	if (nh->is_group) {
++		struct nh_group *nhg = rcu_dereference(nh->nh_grp);
++		int i;
++
++		for (i = 0; i < nhg->num_nh; i++) {
++			struct nexthop *nhe = nhg->nh_entries[i].nh;
++
++			nhi = rcu_dereference(nhe->nh_info);
++			if (nhc_l3mdev_matches_dev(&nhi->fib_nhc, dev))
++				return true;
++		}
++	} else {
++		nhi = rcu_dereference(nh->nh_info);
++		if (nhc_l3mdev_matches_dev(&nhi->fib_nhc, dev))
++			return true;
++	}
++
++	return false;
++}
++
+ static inline unsigned int fib_info_num_path(const struct fib_info *fi)
+ {
+ 	if (unlikely(fi->nh))
+@@ -263,8 +295,11 @@ static inline struct fib6_nh *nexthop_fib6_nh(struct nexthop *nh)
+ {
+ 	struct nh_info *nhi;
+ 
+-	if (nexthop_is_multipath(nh)) {
+-		nh = nexthop_mpath_select(nh, 0);
++	if (nh->is_group) {
++		struct nh_group *nh_grp;
++
++		nh_grp = rcu_dereference_rtnl(nh->nh_grp);
++		nh = nexthop_mpath_select(nh_grp, 0);
+ 		if (!nh)
+ 			return NULL;
+ 	}
+diff --git a/include/net/tls.h b/include/net/tls.h
+index bf9eb4823933..18cd4f418464 100644
+--- a/include/net/tls.h
++++ b/include/net/tls.h
+@@ -135,6 +135,8 @@ struct tls_sw_context_tx {
+ 	struct tls_rec *open_rec;
+ 	struct list_head tx_list;
+ 	atomic_t encrypt_pending;
++	/* protect crypto_wait with encrypt_pending */
++	spinlock_t encrypt_compl_lock;
+ 	int async_notify;
+ 	u8 async_capable:1;
+ 
+@@ -155,6 +157,8 @@ struct tls_sw_context_rx {
+ 	u8 async_capable:1;
+ 	u8 decrypted:1;
+ 	atomic_t decrypt_pending;
++	/* protect crypto_wait with decrypt_pending*/
++	spinlock_t decrypt_compl_lock;
+ 	bool async_notify;
+ };
+ 
+diff --git a/include/rdma/uverbs_std_types.h b/include/rdma/uverbs_std_types.h
+index 1b28ce1aba07..325fdaa3bb66 100644
+--- a/include/rdma/uverbs_std_types.h
++++ b/include/rdma/uverbs_std_types.h
+@@ -88,7 +88,7 @@ struct ib_uobject *__uobj_get_destroy(const struct uverbs_api_object *obj,
+ 
+ static inline void uobj_put_destroy(struct ib_uobject *uobj)
+ {
+-	rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_WRITE);
++	rdma_lookup_put_uobject(uobj, UVERBS_LOOKUP_DESTROY);
+ }
+ 
+ static inline void uobj_put_read(struct ib_uobject *uobj)
+diff --git a/include/uapi/linux/xfrm.h b/include/uapi/linux/xfrm.h
+index 5f3b9fec7b5f..ff7cfdc6cb44 100644
+--- a/include/uapi/linux/xfrm.h
++++ b/include/uapi/linux/xfrm.h
+@@ -304,7 +304,7 @@ enum xfrm_attr_type_t {
+ 	XFRMA_PROTO,		/* __u8 */
+ 	XFRMA_ADDRESS_FILTER,	/* struct xfrm_address_filter */
+ 	XFRMA_PAD,
+-	XFRMA_OFFLOAD_DEV,	/* struct xfrm_state_offload */
++	XFRMA_OFFLOAD_DEV,	/* struct xfrm_user_offload */
+ 	XFRMA_SET_MARK,		/* __u32 */
+ 	XFRMA_SET_MARK_MASK,	/* __u32 */
+ 	XFRMA_IF_ID,		/* __u32 */
+diff --git a/mm/khugepaged.c b/mm/khugepaged.c
+index b679908743cb..ba059e68cf50 100644
+--- a/mm/khugepaged.c
++++ b/mm/khugepaged.c
+@@ -1673,6 +1673,7 @@ static void collapse_file(struct mm_struct *mm,
+ 		if (page_has_private(page) &&
+ 		    !try_to_release_page(page, GFP_KERNEL)) {
+ 			result = SCAN_PAGE_HAS_PRIVATE;
++			putback_lru_page(page);
+ 			goto out_unlock;
+ 		}
+ 
+diff --git a/net/ax25/af_ax25.c b/net/ax25/af_ax25.c
+index ff57ea89c27e..fd91cd34f25e 100644
+--- a/net/ax25/af_ax25.c
++++ b/net/ax25/af_ax25.c
+@@ -635,8 +635,10 @@ static int ax25_setsockopt(struct socket *sock, int level, int optname,
+ 		break;
+ 
+ 	case SO_BINDTODEVICE:
+-		if (optlen > IFNAMSIZ)
+-			optlen = IFNAMSIZ;
++		if (optlen > IFNAMSIZ - 1)
++			optlen = IFNAMSIZ - 1;
++
++		memset(devname, 0, sizeof(devname));
+ 
+ 		if (copy_from_user(devname, optval, optlen)) {
+ 			res = -EFAULT;
+diff --git a/net/bridge/netfilter/nft_reject_bridge.c b/net/bridge/netfilter/nft_reject_bridge.c
+index b325b569e761..f48cf4cfb80f 100644
+--- a/net/bridge/netfilter/nft_reject_bridge.c
++++ b/net/bridge/netfilter/nft_reject_bridge.c
+@@ -31,6 +31,12 @@ static void nft_reject_br_push_etherhdr(struct sk_buff *oldskb,
+ 	ether_addr_copy(eth->h_dest, eth_hdr(oldskb)->h_source);
+ 	eth->h_proto = eth_hdr(oldskb)->h_proto;
+ 	skb_pull(nskb, ETH_HLEN);
++
++	if (skb_vlan_tag_present(oldskb)) {
++		u16 vid = skb_vlan_tag_get(oldskb);
++
++		__vlan_hwaccel_put_tag(nskb, oldskb->vlan_proto, vid);
++	}
+ }
+ 
+ static int nft_bridge_iphdr_validate(struct sk_buff *skb)
+diff --git a/net/ceph/osd_client.c b/net/ceph/osd_client.c
+index af868d3923b9..834019dbc6b1 100644
+--- a/net/ceph/osd_client.c
++++ b/net/ceph/osd_client.c
+@@ -3652,7 +3652,9 @@ static void handle_reply(struct ceph_osd *osd, struct ceph_msg *msg)
+ 		 * supported.
+ 		 */
+ 		req->r_t.target_oloc.pool = m.redirect.oloc.pool;
+-		req->r_flags |= CEPH_OSD_FLAG_REDIRECTED;
++		req->r_flags |= CEPH_OSD_FLAG_REDIRECTED |
++				CEPH_OSD_FLAG_IGNORE_OVERLAY |
++				CEPH_OSD_FLAG_IGNORE_CACHE;
+ 		req->r_tid = 0;
+ 		__submit_request(req, false);
+ 		goto out_unlock_osdc;
+diff --git a/net/core/dev.c b/net/core/dev.c
+index c7047b40f569..87fd5424e205 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -4988,11 +4988,12 @@ static inline int nf_ingress(struct sk_buff *skb, struct packet_type **pt_prev,
+ 	return 0;
+ }
+ 
+-static int __netif_receive_skb_core(struct sk_buff *skb, bool pfmemalloc,
++static int __netif_receive_skb_core(struct sk_buff **pskb, bool pfmemalloc,
+ 				    struct packet_type **ppt_prev)
+ {
+ 	struct packet_type *ptype, *pt_prev;
+ 	rx_handler_func_t *rx_handler;
++	struct sk_buff *skb = *pskb;
+ 	struct net_device *orig_dev;
+ 	bool deliver_exact = false;
+ 	int ret = NET_RX_DROP;
+@@ -5023,8 +5024,10 @@ another_round:
+ 		ret2 = do_xdp_generic(rcu_dereference(skb->dev->xdp_prog), skb);
+ 		preempt_enable();
+ 
+-		if (ret2 != XDP_PASS)
+-			return NET_RX_DROP;
++		if (ret2 != XDP_PASS) {
++			ret = NET_RX_DROP;
++			goto out;
++		}
+ 		skb_reset_mac_len(skb);
+ 	}
+ 
+@@ -5174,6 +5177,13 @@ drop:
+ 	}
+ 
+ out:
++	/* The invariant here is that if *ppt_prev is not NULL
++	 * then skb should also be non-NULL.
++	 *
++	 * Apparently *ppt_prev assignment above holds this invariant due to
++	 * skb dereferencing near it.
++	 */
++	*pskb = skb;
+ 	return ret;
+ }
+ 
+@@ -5183,7 +5193,7 @@ static int __netif_receive_skb_one_core(struct sk_buff *skb, bool pfmemalloc)
+ 	struct packet_type *pt_prev = NULL;
+ 	int ret;
+ 
+-	ret = __netif_receive_skb_core(skb, pfmemalloc, &pt_prev);
++	ret = __netif_receive_skb_core(&skb, pfmemalloc, &pt_prev);
+ 	if (pt_prev)
+ 		ret = INDIRECT_CALL_INET(pt_prev->func, ipv6_rcv, ip_rcv, skb,
+ 					 skb->dev, pt_prev, orig_dev);
+@@ -5261,7 +5271,7 @@ static void __netif_receive_skb_list_core(struct list_head *head, bool pfmemallo
+ 		struct packet_type *pt_prev = NULL;
+ 
+ 		skb_list_del_init(skb);
+-		__netif_receive_skb_core(skb, pfmemalloc, &pt_prev);
++		__netif_receive_skb_core(&skb, pfmemalloc, &pt_prev);
+ 		if (!pt_prev)
+ 			continue;
+ 		if (pt_curr != pt_prev || od_curr != orig_dev) {
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index ddc0f9236928..e2a3d198e8f5 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -1393,6 +1393,7 @@ int dsa_slave_create(struct dsa_port *port)
+ 	if (ds->ops->port_vlan_add && ds->ops->port_vlan_del)
+ 		slave_dev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ 	slave_dev->hw_features |= NETIF_F_HW_TC;
++	slave_dev->features |= NETIF_F_LLTX;
+ 	slave_dev->ethtool_ops = &dsa_slave_ethtool_ops;
+ 	if (!IS_ERR_OR_NULL(port->mac))
+ 		ether_addr_copy(slave_dev->dev_addr, port->mac);
+diff --git a/net/dsa/tag_mtk.c b/net/dsa/tag_mtk.c
+index b5705cba8318..d6619edd53e5 100644
+--- a/net/dsa/tag_mtk.c
++++ b/net/dsa/tag_mtk.c
+@@ -15,6 +15,7 @@
+ #define MTK_HDR_XMIT_TAGGED_TPID_8100	1
+ #define MTK_HDR_RECV_SOURCE_PORT_MASK	GENMASK(2, 0)
+ #define MTK_HDR_XMIT_DP_BIT_MASK	GENMASK(5, 0)
++#define MTK_HDR_XMIT_SA_DIS		BIT(6)
+ 
+ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 				    struct net_device *dev)
+@@ -22,6 +23,9 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 	struct dsa_port *dp = dsa_slave_to_port(dev);
+ 	u8 *mtk_tag;
+ 	bool is_vlan_skb = true;
++	unsigned char *dest = eth_hdr(skb)->h_dest;
++	bool is_multicast_skb = is_multicast_ether_addr(dest) &&
++				!is_broadcast_ether_addr(dest);
+ 
+ 	/* Build the special tag after the MAC Source Address. If VLAN header
+ 	 * is present, it's required that VLAN header and special tag is
+@@ -47,6 +51,10 @@ static struct sk_buff *mtk_tag_xmit(struct sk_buff *skb,
+ 		     MTK_HDR_XMIT_UNTAGGED;
+ 	mtk_tag[1] = (1 << dp->index) & MTK_HDR_XMIT_DP_BIT_MASK;
+ 
++	/* Disable SA learning for multicast frames */
++	if (unlikely(is_multicast_skb))
++		mtk_tag[1] |= MTK_HDR_XMIT_SA_DIS;
++
+ 	/* Tag control information is kept for 802.1Q */
+ 	if (!is_vlan_skb) {
+ 		mtk_tag[2] = 0;
+@@ -61,6 +69,9 @@ static struct sk_buff *mtk_tag_rcv(struct sk_buff *skb, struct net_device *dev,
+ {
+ 	int port;
+ 	__be16 *phdr, hdr;
++	unsigned char *dest = eth_hdr(skb)->h_dest;
++	bool is_multicast_skb = is_multicast_ether_addr(dest) &&
++				!is_broadcast_ether_addr(dest);
+ 
+ 	if (unlikely(!pskb_may_pull(skb, MTK_HDR_LEN)))
+ 		return NULL;
+@@ -86,6 +97,10 @@ static struct sk_buff *mtk_tag_rcv(struct sk_buff *skb, struct net_device *dev,
+ 	if (!skb->dev)
+ 		return NULL;
+ 
++	/* Only unicast or broadcast frames are offloaded */
++	if (likely(!is_multicast_skb))
++		skb->offload_fwd_mark = 1;
++
+ 	return skb;
+ }
+ 
+diff --git a/net/ethtool/netlink.c b/net/ethtool/netlink.c
+index fc9e0b806889..d863dffbe53c 100644
+--- a/net/ethtool/netlink.c
++++ b/net/ethtool/netlink.c
+@@ -334,7 +334,7 @@ static int ethnl_default_doit(struct sk_buff *skb, struct genl_info *info)
+ 	ret = ops->reply_size(req_info, reply_data);
+ 	if (ret < 0)
+ 		goto err_cleanup;
+-	reply_len = ret;
++	reply_len = ret + ethnl_reply_header_size();
+ 	ret = -ENOMEM;
+ 	rskb = ethnl_reply_init(reply_len, req_info->dev, ops->reply_cmd,
+ 				ops->hdr_attr, info, &reply_payload);
+@@ -573,7 +573,7 @@ static void ethnl_default_notify(struct net_device *dev, unsigned int cmd,
+ 	ret = ops->reply_size(req_info, reply_data);
+ 	if (ret < 0)
+ 		goto err_cleanup;
+-	reply_len = ret;
++	reply_len = ret + ethnl_reply_header_size();
+ 	ret = -ENOMEM;
+ 	skb = genlmsg_new(reply_len, GFP_KERNEL);
+ 	if (!skb)
+diff --git a/net/ethtool/strset.c b/net/ethtool/strset.c
+index 8e5911887b4c..fb7b3585458d 100644
+--- a/net/ethtool/strset.c
++++ b/net/ethtool/strset.c
+@@ -309,7 +309,6 @@ static int strset_reply_size(const struct ethnl_req_info *req_base,
+ 	int len = 0;
+ 	int ret;
+ 
+-	len += ethnl_reply_header_size();
+ 	for (i = 0; i < ETH_SS_COUNT; i++) {
+ 		const struct strset_info *set_info = &data->sets[i];
+ 
+diff --git a/net/ipv4/esp4_offload.c b/net/ipv4/esp4_offload.c
+index e2e219c7854a..25c8ba6732df 100644
+--- a/net/ipv4/esp4_offload.c
++++ b/net/ipv4/esp4_offload.c
+@@ -63,10 +63,8 @@ static struct sk_buff *esp4_gro_receive(struct list_head *head,
+ 		sp->olen++;
+ 
+ 		xo = xfrm_offload(skb);
+-		if (!xo) {
+-			xfrm_state_put(x);
++		if (!xo)
+ 			goto out_reset;
+-		}
+ 	}
+ 
+ 	xo->flags |= XFRM_GRO;
+diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
+index 213be9c050ad..41079490a118 100644
+--- a/net/ipv4/fib_frontend.c
++++ b/net/ipv4/fib_frontend.c
+@@ -309,17 +309,18 @@ bool fib_info_nh_uses_dev(struct fib_info *fi, const struct net_device *dev)
+ {
+ 	bool dev_match = false;
+ #ifdef CONFIG_IP_ROUTE_MULTIPATH
+-	int ret;
++	if (unlikely(fi->nh)) {
++		dev_match = nexthop_uses_dev(fi->nh, dev);
++	} else {
++		int ret;
+ 
+-	for (ret = 0; ret < fib_info_num_path(fi); ret++) {
+-		const struct fib_nh_common *nhc = fib_info_nhc(fi, ret);
++		for (ret = 0; ret < fib_info_num_path(fi); ret++) {
++			const struct fib_nh_common *nhc = fib_info_nhc(fi, ret);
+ 
+-		if (nhc->nhc_dev == dev) {
+-			dev_match = true;
+-			break;
+-		} else if (l3mdev_master_ifindex_rcu(nhc->nhc_dev) == dev->ifindex) {
+-			dev_match = true;
+-			break;
++			if (nhc_l3mdev_matches_dev(nhc, dev)) {
++				dev_match = true;
++				break;
++			}
+ 		}
+ 	}
+ #else
+@@ -918,7 +919,6 @@ int ip_valid_fib_dump_req(struct net *net, const struct nlmsghdr *nlh,
+ 	else
+ 		filter->dump_exceptions = false;
+ 
+-	filter->dump_all_families = (rtm->rtm_family == AF_UNSPEC);
+ 	filter->flags    = rtm->rtm_flags;
+ 	filter->protocol = rtm->rtm_protocol;
+ 	filter->rt_type  = rtm->rtm_type;
+@@ -990,7 +990,7 @@ static int inet_dump_fib(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (filter.table_id) {
+ 		tb = fib_get_table(net, filter.table_id);
+ 		if (!tb) {
+-			if (filter.dump_all_families)
++			if (rtnl_msg_family(cb->nlh) != PF_INET)
+ 				return skb->len;
+ 
+ 			NL_SET_ERR_MSG(cb->extack, "ipv4: FIB table does not exist");
+diff --git a/net/ipv4/inet_connection_sock.c b/net/ipv4/inet_connection_sock.c
+index d545fb99a8a1..76afe93904d5 100644
+--- a/net/ipv4/inet_connection_sock.c
++++ b/net/ipv4/inet_connection_sock.c
+@@ -24,17 +24,19 @@
+ #include <net/addrconf.h>
+ 
+ #if IS_ENABLED(CONFIG_IPV6)
+-/* match_wildcard == true:  IPV6_ADDR_ANY equals to any IPv6 addresses if IPv6
+- *                          only, and any IPv4 addresses if not IPv6 only
+- * match_wildcard == false: addresses must be exactly the same, i.e.
+- *                          IPV6_ADDR_ANY only equals to IPV6_ADDR_ANY,
+- *                          and 0.0.0.0 equals to 0.0.0.0 only
++/* match_sk*_wildcard == true:  IPV6_ADDR_ANY equals to any IPv6 addresses
++ *				if IPv6 only, and any IPv4 addresses
++ *				if not IPv6 only
++ * match_sk*_wildcard == false: addresses must be exactly the same, i.e.
++ *				IPV6_ADDR_ANY only equals to IPV6_ADDR_ANY,
++ *				and 0.0.0.0 equals to 0.0.0.0 only
+  */
+ static bool ipv6_rcv_saddr_equal(const struct in6_addr *sk1_rcv_saddr6,
+ 				 const struct in6_addr *sk2_rcv_saddr6,
+ 				 __be32 sk1_rcv_saddr, __be32 sk2_rcv_saddr,
+ 				 bool sk1_ipv6only, bool sk2_ipv6only,
+-				 bool match_wildcard)
++				 bool match_sk1_wildcard,
++				 bool match_sk2_wildcard)
+ {
+ 	int addr_type = ipv6_addr_type(sk1_rcv_saddr6);
+ 	int addr_type2 = sk2_rcv_saddr6 ? ipv6_addr_type(sk2_rcv_saddr6) : IPV6_ADDR_MAPPED;
+@@ -44,8 +46,8 @@ static bool ipv6_rcv_saddr_equal(const struct in6_addr *sk1_rcv_saddr6,
+ 		if (!sk2_ipv6only) {
+ 			if (sk1_rcv_saddr == sk2_rcv_saddr)
+ 				return true;
+-			if (!sk1_rcv_saddr || !sk2_rcv_saddr)
+-				return match_wildcard;
++			return (match_sk1_wildcard && !sk1_rcv_saddr) ||
++				(match_sk2_wildcard && !sk2_rcv_saddr);
+ 		}
+ 		return false;
+ 	}
+@@ -53,11 +55,11 @@ static bool ipv6_rcv_saddr_equal(const struct in6_addr *sk1_rcv_saddr6,
+ 	if (addr_type == IPV6_ADDR_ANY && addr_type2 == IPV6_ADDR_ANY)
+ 		return true;
+ 
+-	if (addr_type2 == IPV6_ADDR_ANY && match_wildcard &&
++	if (addr_type2 == IPV6_ADDR_ANY && match_sk2_wildcard &&
+ 	    !(sk2_ipv6only && addr_type == IPV6_ADDR_MAPPED))
+ 		return true;
+ 
+-	if (addr_type == IPV6_ADDR_ANY && match_wildcard &&
++	if (addr_type == IPV6_ADDR_ANY && match_sk1_wildcard &&
+ 	    !(sk1_ipv6only && addr_type2 == IPV6_ADDR_MAPPED))
+ 		return true;
+ 
+@@ -69,18 +71,19 @@ static bool ipv6_rcv_saddr_equal(const struct in6_addr *sk1_rcv_saddr6,
+ }
+ #endif
+ 
+-/* match_wildcard == true:  0.0.0.0 equals to any IPv4 addresses
+- * match_wildcard == false: addresses must be exactly the same, i.e.
+- *                          0.0.0.0 only equals to 0.0.0.0
++/* match_sk*_wildcard == true:  0.0.0.0 equals to any IPv4 addresses
++ * match_sk*_wildcard == false: addresses must be exactly the same, i.e.
++ *				0.0.0.0 only equals to 0.0.0.0
+  */
+ static bool ipv4_rcv_saddr_equal(__be32 sk1_rcv_saddr, __be32 sk2_rcv_saddr,
+-				 bool sk2_ipv6only, bool match_wildcard)
++				 bool sk2_ipv6only, bool match_sk1_wildcard,
++				 bool match_sk2_wildcard)
+ {
+ 	if (!sk2_ipv6only) {
+ 		if (sk1_rcv_saddr == sk2_rcv_saddr)
+ 			return true;
+-		if (!sk1_rcv_saddr || !sk2_rcv_saddr)
+-			return match_wildcard;
++		return (match_sk1_wildcard && !sk1_rcv_saddr) ||
++			(match_sk2_wildcard && !sk2_rcv_saddr);
+ 	}
+ 	return false;
+ }
+@@ -96,10 +99,12 @@ bool inet_rcv_saddr_equal(const struct sock *sk, const struct sock *sk2,
+ 					    sk2->sk_rcv_saddr,
+ 					    ipv6_only_sock(sk),
+ 					    ipv6_only_sock(sk2),
++					    match_wildcard,
+ 					    match_wildcard);
+ #endif
+ 	return ipv4_rcv_saddr_equal(sk->sk_rcv_saddr, sk2->sk_rcv_saddr,
+-				    ipv6_only_sock(sk2), match_wildcard);
++				    ipv6_only_sock(sk2), match_wildcard,
++				    match_wildcard);
+ }
+ EXPORT_SYMBOL(inet_rcv_saddr_equal);
+ 
+@@ -273,10 +278,10 @@ static inline int sk_reuseport_match(struct inet_bind_bucket *tb,
+ 					    tb->fast_rcv_saddr,
+ 					    sk->sk_rcv_saddr,
+ 					    tb->fast_ipv6_only,
+-					    ipv6_only_sock(sk), true);
++					    ipv6_only_sock(sk), true, false);
+ #endif
+ 	return ipv4_rcv_saddr_equal(tb->fast_rcv_saddr, sk->sk_rcv_saddr,
+-				    ipv6_only_sock(sk), true);
++				    ipv6_only_sock(sk), true, false);
+ }
+ 
+ /* Obtain a reference to a local port for the given sock,
+diff --git a/net/ipv4/ip_vti.c b/net/ipv4/ip_vti.c
+index 1b4e6f298648..1dda7c155c48 100644
+--- a/net/ipv4/ip_vti.c
++++ b/net/ipv4/ip_vti.c
+@@ -93,7 +93,28 @@ static int vti_rcv_proto(struct sk_buff *skb)
+ 
+ static int vti_rcv_tunnel(struct sk_buff *skb)
+ {
+-	return vti_rcv(skb, ip_hdr(skb)->saddr, true);
++	struct ip_tunnel_net *itn = net_generic(dev_net(skb->dev), vti_net_id);
++	const struct iphdr *iph = ip_hdr(skb);
++	struct ip_tunnel *tunnel;
++
++	tunnel = ip_tunnel_lookup(itn, skb->dev->ifindex, TUNNEL_NO_KEY,
++				  iph->saddr, iph->daddr, 0);
++	if (tunnel) {
++		struct tnl_ptk_info tpi = {
++			.proto = htons(ETH_P_IP),
++		};
++
++		if (!xfrm4_policy_check(NULL, XFRM_POLICY_IN, skb))
++			goto drop;
++		if (iptunnel_pull_header(skb, 0, tpi.proto, false))
++			goto drop;
++		return ip_tunnel_rcv(tunnel, skb, &tpi, NULL, false);
++	}
++
++	return -EINVAL;
++drop:
++	kfree_skb(skb);
++	return 0;
+ }
+ 
+ static int vti_rcv_cb(struct sk_buff *skb, int err)
+diff --git a/net/ipv4/ipip.c b/net/ipv4/ipip.c
+index 2f01cf6fa0de..678575adaf3b 100644
+--- a/net/ipv4/ipip.c
++++ b/net/ipv4/ipip.c
+@@ -698,7 +698,7 @@ out:
+ 
+ rtnl_link_failed:
+ #if IS_ENABLED(CONFIG_MPLS)
+-	xfrm4_tunnel_deregister(&mplsip_handler, AF_INET);
++	xfrm4_tunnel_deregister(&mplsip_handler, AF_MPLS);
+ xfrm_tunnel_mplsip_failed:
+ 
+ #endif
+diff --git a/net/ipv4/ipmr.c b/net/ipv4/ipmr.c
+index 6e68def66822..2508b4c37af3 100644
+--- a/net/ipv4/ipmr.c
++++ b/net/ipv4/ipmr.c
+@@ -2611,7 +2611,7 @@ static int ipmr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		mrt = ipmr_get_table(sock_net(skb->sk), filter.table_id);
+ 		if (!mrt) {
+-			if (filter.dump_all_families)
++			if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IPMR)
+ 				return skb->len;
+ 
+ 			NL_SET_ERR_MSG(cb->extack, "ipv4: MR table does not exist");
+diff --git a/net/ipv4/netfilter/nf_nat_pptp.c b/net/ipv4/netfilter/nf_nat_pptp.c
+index b2aeb7bf5dac..2a1e10f4ae93 100644
+--- a/net/ipv4/netfilter/nf_nat_pptp.c
++++ b/net/ipv4/netfilter/nf_nat_pptp.c
+@@ -166,8 +166,7 @@ pptp_outbound_pkt(struct sk_buff *skb,
+ 		break;
+ 	default:
+ 		pr_debug("unknown outbound packet 0x%04x:%s\n", msg,
+-			 msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] :
+-					       pptp_msg_name[0]);
++			 pptp_msg_name(msg));
+ 		/* fall through */
+ 	case PPTP_SET_LINK_INFO:
+ 		/* only need to NAT in case PAC is behind NAT box */
+@@ -268,9 +267,7 @@ pptp_inbound_pkt(struct sk_buff *skb,
+ 		pcid_off = offsetof(union pptp_ctrl_union, setlink.peersCallID);
+ 		break;
+ 	default:
+-		pr_debug("unknown inbound packet %s\n",
+-			 msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] :
+-					       pptp_msg_name[0]);
++		pr_debug("unknown inbound packet %s\n", pptp_msg_name(msg));
+ 		/* fall through */
+ 	case PPTP_START_SESSION_REQUEST:
+ 	case PPTP_START_SESSION_REPLY:
+diff --git a/net/ipv4/nexthop.c b/net/ipv4/nexthop.c
+index d072c326dd64..b6ecb30544f6 100644
+--- a/net/ipv4/nexthop.c
++++ b/net/ipv4/nexthop.c
+@@ -63,9 +63,16 @@ static void nexthop_free_mpath(struct nexthop *nh)
+ 	int i;
+ 
+ 	nhg = rcu_dereference_raw(nh->nh_grp);
+-	for (i = 0; i < nhg->num_nh; ++i)
+-		WARN_ON(nhg->nh_entries[i].nh);
++	for (i = 0; i < nhg->num_nh; ++i) {
++		struct nh_grp_entry *nhge = &nhg->nh_entries[i];
++
++		WARN_ON(!list_empty(&nhge->nh_list));
++		nexthop_put(nhge->nh);
++	}
++
++	WARN_ON(nhg->spare == nhg);
+ 
++	kfree(nhg->spare);
+ 	kfree(nhg);
+ }
+ 
+@@ -276,6 +283,7 @@ out:
+ 	return 0;
+ 
+ nla_put_failure:
++	nlmsg_cancel(skb, nlh);
+ 	return -EMSGSIZE;
+ }
+ 
+@@ -433,7 +441,7 @@ static int nh_check_attr_group(struct net *net, struct nlattr *tb[],
+ 		if (!valid_group_nh(nh, len, extack))
+ 			return -EINVAL;
+ 	}
+-	for (i = NHA_GROUP + 1; i < __NHA_MAX; ++i) {
++	for (i = NHA_GROUP_TYPE + 1; i < __NHA_MAX; ++i) {
+ 		if (!tb[i])
+ 			continue;
+ 
+@@ -693,41 +701,56 @@ static void nh_group_rebalance(struct nh_group *nhg)
+ 	}
+ }
+ 
+-static void remove_nh_grp_entry(struct nh_grp_entry *nhge,
+-				struct nh_group *nhg,
++static void remove_nh_grp_entry(struct net *net, struct nh_grp_entry *nhge,
+ 				struct nl_info *nlinfo)
+ {
++	struct nh_grp_entry *nhges, *new_nhges;
++	struct nexthop *nhp = nhge->nh_parent;
+ 	struct nexthop *nh = nhge->nh;
+-	struct nh_grp_entry *nhges;
+-	bool found = false;
+-	int i;
++	struct nh_group *nhg, *newg;
++	int i, j;
+ 
+ 	WARN_ON(!nh);
+ 
+-	nhges = nhg->nh_entries;
+-	for (i = 0; i < nhg->num_nh; ++i) {
+-		if (found) {
+-			nhges[i-1].nh = nhges[i].nh;
+-			nhges[i-1].weight = nhges[i].weight;
+-			list_del(&nhges[i].nh_list);
+-			list_add(&nhges[i-1].nh_list, &nhges[i-1].nh->grp_list);
+-		} else if (nhg->nh_entries[i].nh == nh) {
+-			found = true;
+-		}
+-	}
++	nhg = rtnl_dereference(nhp->nh_grp);
++	newg = nhg->spare;
+ 
+-	if (WARN_ON(!found))
++	/* last entry, keep it visible and remove the parent */
++	if (nhg->num_nh == 1) {
++		remove_nexthop(net, nhp, nlinfo);
+ 		return;
++	}
++
++	newg->has_v4 = nhg->has_v4;
++	newg->mpath = nhg->mpath;
++	newg->num_nh = nhg->num_nh;
+ 
+-	nhg->num_nh--;
+-	nhg->nh_entries[nhg->num_nh].nh = NULL;
++	/* copy old entries to new except the one getting removed */
++	nhges = nhg->nh_entries;
++	new_nhges = newg->nh_entries;
++	for (i = 0, j = 0; i < nhg->num_nh; ++i) {
++		/* current nexthop getting removed */
++		if (nhg->nh_entries[i].nh == nh) {
++			newg->num_nh--;
++			continue;
++		}
+ 
+-	nh_group_rebalance(nhg);
++		list_del(&nhges[i].nh_list);
++		new_nhges[j].nh_parent = nhges[i].nh_parent;
++		new_nhges[j].nh = nhges[i].nh;
++		new_nhges[j].weight = nhges[i].weight;
++		list_add(&new_nhges[j].nh_list, &new_nhges[j].nh->grp_list);
++		j++;
++	}
+ 
+-	nexthop_put(nh);
++	nh_group_rebalance(newg);
++	rcu_assign_pointer(nhp->nh_grp, newg);
++
++	list_del(&nhge->nh_list);
++	nexthop_put(nhge->nh);
+ 
+ 	if (nlinfo)
+-		nexthop_notify(RTM_NEWNEXTHOP, nhge->nh_parent, nlinfo);
++		nexthop_notify(RTM_NEWNEXTHOP, nhp, nlinfo);
+ }
+ 
+ static void remove_nexthop_from_groups(struct net *net, struct nexthop *nh,
+@@ -735,17 +758,11 @@ static void remove_nexthop_from_groups(struct net *net, struct nexthop *nh,
+ {
+ 	struct nh_grp_entry *nhge, *tmp;
+ 
+-	list_for_each_entry_safe(nhge, tmp, &nh->grp_list, nh_list) {
+-		struct nh_group *nhg;
+-
+-		list_del(&nhge->nh_list);
+-		nhg = rtnl_dereference(nhge->nh_parent->nh_grp);
+-		remove_nh_grp_entry(nhge, nhg, nlinfo);
++	list_for_each_entry_safe(nhge, tmp, &nh->grp_list, nh_list)
++		remove_nh_grp_entry(net, nhge, nlinfo);
+ 
+-		/* if this group has no more entries then remove it */
+-		if (!nhg->num_nh)
+-			remove_nexthop(net, nhge->nh_parent, nlinfo);
+-	}
++	/* make sure all see the newly published array before releasing rtnl */
++	synchronize_rcu();
+ }
+ 
+ static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo)
+@@ -759,10 +776,7 @@ static void remove_nexthop_group(struct nexthop *nh, struct nl_info *nlinfo)
+ 		if (WARN_ON(!nhge->nh))
+ 			continue;
+ 
+-		list_del(&nhge->nh_list);
+-		nexthop_put(nhge->nh);
+-		nhge->nh = NULL;
+-		nhg->num_nh--;
++		list_del_init(&nhge->nh_list);
+ 	}
+ }
+ 
+@@ -1085,6 +1099,7 @@ static struct nexthop *nexthop_create_group(struct net *net,
+ {
+ 	struct nlattr *grps_attr = cfg->nh_grp;
+ 	struct nexthop_grp *entry = nla_data(grps_attr);
++	u16 num_nh = nla_len(grps_attr) / sizeof(*entry);
+ 	struct nh_group *nhg;
+ 	struct nexthop *nh;
+ 	int i;
+@@ -1095,12 +1110,21 @@ static struct nexthop *nexthop_create_group(struct net *net,
+ 
+ 	nh->is_group = 1;
+ 
+-	nhg = nexthop_grp_alloc(nla_len(grps_attr) / sizeof(*entry));
++	nhg = nexthop_grp_alloc(num_nh);
+ 	if (!nhg) {
+ 		kfree(nh);
+ 		return ERR_PTR(-ENOMEM);
+ 	}
+ 
++	/* spare group used for removals */
++	nhg->spare = nexthop_grp_alloc(num_nh);
++	if (!nhg) {
++		kfree(nhg);
++		kfree(nh);
++		return NULL;
++	}
++	nhg->spare->spare = nhg;
++
+ 	for (i = 0; i < nhg->num_nh; ++i) {
+ 		struct nexthop *nhe;
+ 		struct nh_info *nhi;
+@@ -1132,6 +1156,7 @@ out_no_nh:
+ 	for (; i >= 0; --i)
+ 		nexthop_put(nhg->nh_entries[i].nh);
+ 
++	kfree(nhg->spare);
+ 	kfree(nhg);
+ 	kfree(nh);
+ 
+diff --git a/net/ipv4/route.c b/net/ipv4/route.c
+index ef6b70774fe1..fea6a8a11183 100644
+--- a/net/ipv4/route.c
++++ b/net/ipv4/route.c
+@@ -491,18 +491,16 @@ u32 ip_idents_reserve(u32 hash, int segs)
+ 	atomic_t *p_id = ip_idents + hash % IP_IDENTS_SZ;
+ 	u32 old = READ_ONCE(*p_tstamp);
+ 	u32 now = (u32)jiffies;
+-	u32 new, delta = 0;
++	u32 delta = 0;
+ 
+ 	if (old != now && cmpxchg(p_tstamp, old, now) == old)
+ 		delta = prandom_u32_max(now - old);
+ 
+-	/* Do not use atomic_add_return() as it makes UBSAN unhappy */
+-	do {
+-		old = (u32)atomic_read(p_id);
+-		new = old + delta + segs;
+-	} while (atomic_cmpxchg(p_id, old, new) != old);
+-
+-	return new - segs;
++	/* If UBSAN reports an error there, please make sure your compiler
++	 * supports -fno-strict-overflow before reporting it that was a bug
++	 * in UBSAN, and it has been fixed in GCC-8.
++	 */
++	return atomic_add_return(segs + delta, p_id) - segs;
+ }
+ EXPORT_SYMBOL(ip_idents_reserve);
+ 
+diff --git a/net/ipv6/esp6_offload.c b/net/ipv6/esp6_offload.c
+index fd535053245b..93e086cf058a 100644
+--- a/net/ipv6/esp6_offload.c
++++ b/net/ipv6/esp6_offload.c
+@@ -85,10 +85,8 @@ static struct sk_buff *esp6_gro_receive(struct list_head *head,
+ 		sp->olen++;
+ 
+ 		xo = xfrm_offload(skb);
+-		if (!xo) {
+-			xfrm_state_put(x);
++		if (!xo)
+ 			goto out_reset;
+-		}
+ 	}
+ 
+ 	xo->flags |= XFRM_GRO;
+@@ -123,9 +121,16 @@ static void esp6_gso_encap(struct xfrm_state *x, struct sk_buff *skb)
+ 	struct ip_esp_hdr *esph;
+ 	struct ipv6hdr *iph = ipv6_hdr(skb);
+ 	struct xfrm_offload *xo = xfrm_offload(skb);
+-	int proto = iph->nexthdr;
++	u8 proto = iph->nexthdr;
+ 
+ 	skb_push(skb, -skb_network_offset(skb));
++
++	if (x->outer_mode.encap == XFRM_MODE_TRANSPORT) {
++		__be16 frag;
++
++		ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &proto, &frag);
++	}
++
+ 	esph = ip_esp_hdr(skb);
+ 	*skb_mac_header(skb) = IPPROTO_ESP;
+ 
+diff --git a/net/ipv6/ip6_fib.c b/net/ipv6/ip6_fib.c
+index 72abf892302f..9a53590ef79c 100644
+--- a/net/ipv6/ip6_fib.c
++++ b/net/ipv6/ip6_fib.c
+@@ -664,7 +664,7 @@ static int inet6_dump_fib(struct sk_buff *skb, struct netlink_callback *cb)
+ 	if (arg.filter.table_id) {
+ 		tb = fib6_get_table(net, arg.filter.table_id);
+ 		if (!tb) {
+-			if (arg.filter.dump_all_families)
++			if (rtnl_msg_family(cb->nlh) != PF_INET6)
+ 				goto out;
+ 
+ 			NL_SET_ERR_MSG_MOD(cb->extack, "FIB table does not exist");
+diff --git a/net/ipv6/ip6mr.c b/net/ipv6/ip6mr.c
+index bfa49ff70531..2ddb7c513e54 100644
+--- a/net/ipv6/ip6mr.c
++++ b/net/ipv6/ip6mr.c
+@@ -2501,7 +2501,7 @@ static int ip6mr_rtm_dumproute(struct sk_buff *skb, struct netlink_callback *cb)
+ 
+ 		mrt = ip6mr_get_table(sock_net(skb->sk), filter.table_id);
+ 		if (!mrt) {
+-			if (filter.dump_all_families)
++			if (rtnl_msg_family(cb->nlh) != RTNL_FAMILY_IP6MR)
+ 				return skb->len;
+ 
+ 			NL_SET_ERR_MSG_MOD(cb->extack, "MR table does not exist");
+diff --git a/net/mac80211/mesh_hwmp.c b/net/mac80211/mesh_hwmp.c
+index 38a0383dfbcf..aa5150929996 100644
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -1103,7 +1103,14 @@ void mesh_path_start_discovery(struct ieee80211_sub_if_data *sdata)
+ 	mesh_path_sel_frame_tx(MPATH_PREQ, 0, sdata->vif.addr, ifmsh->sn,
+ 			       target_flags, mpath->dst, mpath->sn, da, 0,
+ 			       ttl, lifetime, 0, ifmsh->preq_id++, sdata);
++
++	spin_lock_bh(&mpath->state_lock);
++	if (mpath->flags & MESH_PATH_DELETED) {
++		spin_unlock_bh(&mpath->state_lock);
++		goto enddiscovery;
++	}
+ 	mod_timer(&mpath->timer, jiffies + mpath->discovery_timeout);
++	spin_unlock_bh(&mpath->state_lock);
+ 
+ enddiscovery:
+ 	rcu_read_unlock();
+diff --git a/net/netfilter/ipset/ip_set_list_set.c b/net/netfilter/ipset/ip_set_list_set.c
+index cd747c0962fd..5a67f7966574 100644
+--- a/net/netfilter/ipset/ip_set_list_set.c
++++ b/net/netfilter/ipset/ip_set_list_set.c
+@@ -59,7 +59,7 @@ list_set_ktest(struct ip_set *set, const struct sk_buff *skb,
+ 	/* Don't lookup sub-counters at all */
+ 	opt->cmdflags &= ~IPSET_FLAG_MATCH_COUNTERS;
+ 	if (opt->cmdflags & IPSET_FLAG_SKIP_SUBCOUNTER_UPDATE)
+-		opt->cmdflags &= ~IPSET_FLAG_SKIP_COUNTER_UPDATE;
++		opt->cmdflags |= IPSET_FLAG_SKIP_COUNTER_UPDATE;
+ 	list_for_each_entry_rcu(e, &map->members, list) {
+ 		ret = ip_set_test(e->id, skb, par, opt);
+ 		if (ret <= 0)
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index d11a58348133..7c503b4751c4 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2014,22 +2014,18 @@ static void nf_conntrack_attach(struct sk_buff *nskb, const struct sk_buff *skb)
+ 	nf_conntrack_get(skb_nfct(nskb));
+ }
+ 
+-static int nf_conntrack_update(struct net *net, struct sk_buff *skb)
++static int __nf_conntrack_update(struct net *net, struct sk_buff *skb,
++				 struct nf_conn *ct,
++				 enum ip_conntrack_info ctinfo)
+ {
+ 	struct nf_conntrack_tuple_hash *h;
+ 	struct nf_conntrack_tuple tuple;
+-	enum ip_conntrack_info ctinfo;
+ 	struct nf_nat_hook *nat_hook;
+ 	unsigned int status;
+-	struct nf_conn *ct;
+ 	int dataoff;
+ 	u16 l3num;
+ 	u8 l4num;
+ 
+-	ct = nf_ct_get(skb, &ctinfo);
+-	if (!ct || nf_ct_is_confirmed(ct))
+-		return 0;
+-
+ 	l3num = nf_ct_l3num(ct);
+ 
+ 	dataoff = get_l4proto(skb, skb_network_offset(skb), l3num, &l4num);
+@@ -2086,6 +2082,76 @@ static int nf_conntrack_update(struct net *net, struct sk_buff *skb)
+ 	return 0;
+ }
+ 
++/* This packet is coming from userspace via nf_queue, complete the packet
++ * processing after the helper invocation in nf_confirm().
++ */
++static int nf_confirm_cthelper(struct sk_buff *skb, struct nf_conn *ct,
++			       enum ip_conntrack_info ctinfo)
++{
++	const struct nf_conntrack_helper *helper;
++	const struct nf_conn_help *help;
++	int protoff;
++
++	help = nfct_help(ct);
++	if (!help)
++		return 0;
++
++	helper = rcu_dereference(help->helper);
++	if (!(helper->flags & NF_CT_HELPER_F_USERSPACE))
++		return 0;
++
++	switch (nf_ct_l3num(ct)) {
++	case NFPROTO_IPV4:
++		protoff = skb_network_offset(skb) + ip_hdrlen(skb);
++		break;
++#if IS_ENABLED(CONFIG_IPV6)
++	case NFPROTO_IPV6: {
++		__be16 frag_off;
++		u8 pnum;
++
++		pnum = ipv6_hdr(skb)->nexthdr;
++		protoff = ipv6_skip_exthdr(skb, sizeof(struct ipv6hdr), &pnum,
++					   &frag_off);
++		if (protoff < 0 || (frag_off & htons(~0x7)) != 0)
++			return 0;
++		break;
++	}
++#endif
++	default:
++		return 0;
++	}
++
++	if (test_bit(IPS_SEQ_ADJUST_BIT, &ct->status) &&
++	    !nf_is_loopback_packet(skb)) {
++		if (!nf_ct_seq_adjust(skb, ct, ctinfo, protoff)) {
++			NF_CT_STAT_INC_ATOMIC(nf_ct_net(ct), drop);
++			return -1;
++		}
++	}
++
++	/* We've seen it coming out the other side: confirm it */
++	return nf_conntrack_confirm(skb) == NF_DROP ? - 1 : 0;
++}
++
++static int nf_conntrack_update(struct net *net, struct sk_buff *skb)
++{
++	enum ip_conntrack_info ctinfo;
++	struct nf_conn *ct;
++	int err;
++
++	ct = nf_ct_get(skb, &ctinfo);
++	if (!ct)
++		return 0;
++
++	if (!nf_ct_is_confirmed(ct)) {
++		err = __nf_conntrack_update(net, skb, ct, ctinfo);
++		if (err < 0)
++			return err;
++	}
++
++	return nf_confirm_cthelper(skb, ct, ctinfo);
++}
++
+ static bool nf_conntrack_get_tuple_skb(struct nf_conntrack_tuple *dst_tuple,
+ 				       const struct sk_buff *skb)
+ {
+diff --git a/net/netfilter/nf_conntrack_pptp.c b/net/netfilter/nf_conntrack_pptp.c
+index a971183f11af..1f44d523b512 100644
+--- a/net/netfilter/nf_conntrack_pptp.c
++++ b/net/netfilter/nf_conntrack_pptp.c
+@@ -72,24 +72,32 @@ EXPORT_SYMBOL_GPL(nf_nat_pptp_hook_expectfn);
+ 
+ #if defined(DEBUG) || defined(CONFIG_DYNAMIC_DEBUG)
+ /* PptpControlMessageType names */
+-const char *const pptp_msg_name[] = {
+-	"UNKNOWN_MESSAGE",
+-	"START_SESSION_REQUEST",
+-	"START_SESSION_REPLY",
+-	"STOP_SESSION_REQUEST",
+-	"STOP_SESSION_REPLY",
+-	"ECHO_REQUEST",
+-	"ECHO_REPLY",
+-	"OUT_CALL_REQUEST",
+-	"OUT_CALL_REPLY",
+-	"IN_CALL_REQUEST",
+-	"IN_CALL_REPLY",
+-	"IN_CALL_CONNECT",
+-	"CALL_CLEAR_REQUEST",
+-	"CALL_DISCONNECT_NOTIFY",
+-	"WAN_ERROR_NOTIFY",
+-	"SET_LINK_INFO"
++static const char *const pptp_msg_name_array[PPTP_MSG_MAX + 1] = {
++	[0]				= "UNKNOWN_MESSAGE",
++	[PPTP_START_SESSION_REQUEST]	= "START_SESSION_REQUEST",
++	[PPTP_START_SESSION_REPLY]	= "START_SESSION_REPLY",
++	[PPTP_STOP_SESSION_REQUEST]	= "STOP_SESSION_REQUEST",
++	[PPTP_STOP_SESSION_REPLY]	= "STOP_SESSION_REPLY",
++	[PPTP_ECHO_REQUEST]		= "ECHO_REQUEST",
++	[PPTP_ECHO_REPLY]		= "ECHO_REPLY",
++	[PPTP_OUT_CALL_REQUEST]		= "OUT_CALL_REQUEST",
++	[PPTP_OUT_CALL_REPLY]		= "OUT_CALL_REPLY",
++	[PPTP_IN_CALL_REQUEST]		= "IN_CALL_REQUEST",
++	[PPTP_IN_CALL_REPLY]		= "IN_CALL_REPLY",
++	[PPTP_IN_CALL_CONNECT]		= "IN_CALL_CONNECT",
++	[PPTP_CALL_CLEAR_REQUEST]	= "CALL_CLEAR_REQUEST",
++	[PPTP_CALL_DISCONNECT_NOTIFY]	= "CALL_DISCONNECT_NOTIFY",
++	[PPTP_WAN_ERROR_NOTIFY]		= "WAN_ERROR_NOTIFY",
++	[PPTP_SET_LINK_INFO]		= "SET_LINK_INFO"
+ };
++
++const char *pptp_msg_name(u_int16_t msg)
++{
++	if (msg > PPTP_MSG_MAX)
++		return pptp_msg_name_array[0];
++
++	return pptp_msg_name_array[msg];
++}
+ EXPORT_SYMBOL(pptp_msg_name);
+ #endif
+ 
+@@ -276,7 +284,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 	typeof(nf_nat_pptp_hook_inbound) nf_nat_pptp_inbound;
+ 
+ 	msg = ntohs(ctlh->messageType);
+-	pr_debug("inbound control message %s\n", pptp_msg_name[msg]);
++	pr_debug("inbound control message %s\n", pptp_msg_name(msg));
+ 
+ 	switch (msg) {
+ 	case PPTP_START_SESSION_REPLY:
+@@ -311,7 +319,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 		pcid = pptpReq->ocack.peersCallID;
+ 		if (info->pns_call_id != pcid)
+ 			goto invalid;
+-		pr_debug("%s, CID=%X, PCID=%X\n", pptp_msg_name[msg],
++		pr_debug("%s, CID=%X, PCID=%X\n", pptp_msg_name(msg),
+ 			 ntohs(cid), ntohs(pcid));
+ 
+ 		if (pptpReq->ocack.resultCode == PPTP_OUTCALL_CONNECT) {
+@@ -328,7 +336,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 			goto invalid;
+ 
+ 		cid = pptpReq->icreq.callID;
+-		pr_debug("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid));
++		pr_debug("%s, CID=%X\n", pptp_msg_name(msg), ntohs(cid));
+ 		info->cstate = PPTP_CALL_IN_REQ;
+ 		info->pac_call_id = cid;
+ 		break;
+@@ -347,7 +355,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 		if (info->pns_call_id != pcid)
+ 			goto invalid;
+ 
+-		pr_debug("%s, PCID=%X\n", pptp_msg_name[msg], ntohs(pcid));
++		pr_debug("%s, PCID=%X\n", pptp_msg_name(msg), ntohs(pcid));
+ 		info->cstate = PPTP_CALL_IN_CONF;
+ 
+ 		/* we expect a GRE connection from PAC to PNS */
+@@ -357,7 +365,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 	case PPTP_CALL_DISCONNECT_NOTIFY:
+ 		/* server confirms disconnect */
+ 		cid = pptpReq->disc.callID;
+-		pr_debug("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid));
++		pr_debug("%s, CID=%X\n", pptp_msg_name(msg), ntohs(cid));
+ 		info->cstate = PPTP_CALL_NONE;
+ 
+ 		/* untrack this call id, unexpect GRE packets */
+@@ -384,7 +392,7 @@ pptp_inbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ invalid:
+ 	pr_debug("invalid %s: type=%d cid=%u pcid=%u "
+ 		 "cstate=%d sstate=%d pns_cid=%u pac_cid=%u\n",
+-		 msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] : pptp_msg_name[0],
++		 pptp_msg_name(msg),
+ 		 msg, ntohs(cid), ntohs(pcid),  info->cstate, info->sstate,
+ 		 ntohs(info->pns_call_id), ntohs(info->pac_call_id));
+ 	return NF_ACCEPT;
+@@ -404,7 +412,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 	typeof(nf_nat_pptp_hook_outbound) nf_nat_pptp_outbound;
+ 
+ 	msg = ntohs(ctlh->messageType);
+-	pr_debug("outbound control message %s\n", pptp_msg_name[msg]);
++	pr_debug("outbound control message %s\n", pptp_msg_name(msg));
+ 
+ 	switch (msg) {
+ 	case PPTP_START_SESSION_REQUEST:
+@@ -426,7 +434,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 		info->cstate = PPTP_CALL_OUT_REQ;
+ 		/* track PNS call id */
+ 		cid = pptpReq->ocreq.callID;
+-		pr_debug("%s, CID=%X\n", pptp_msg_name[msg], ntohs(cid));
++		pr_debug("%s, CID=%X\n", pptp_msg_name(msg), ntohs(cid));
+ 		info->pns_call_id = cid;
+ 		break;
+ 
+@@ -440,7 +448,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ 		pcid = pptpReq->icack.peersCallID;
+ 		if (info->pac_call_id != pcid)
+ 			goto invalid;
+-		pr_debug("%s, CID=%X PCID=%X\n", pptp_msg_name[msg],
++		pr_debug("%s, CID=%X PCID=%X\n", pptp_msg_name(msg),
+ 			 ntohs(cid), ntohs(pcid));
+ 
+ 		if (pptpReq->icack.resultCode == PPTP_INCALL_ACCEPT) {
+@@ -480,7 +488,7 @@ pptp_outbound_pkt(struct sk_buff *skb, unsigned int protoff,
+ invalid:
+ 	pr_debug("invalid %s: type=%d cid=%u pcid=%u "
+ 		 "cstate=%d sstate=%d pns_cid=%u pac_cid=%u\n",
+-		 msg <= PPTP_MSG_MAX ? pptp_msg_name[msg] : pptp_msg_name[0],
++		 pptp_msg_name(msg),
+ 		 msg, ntohs(cid), ntohs(pcid),  info->cstate, info->sstate,
+ 		 ntohs(info->pns_call_id), ntohs(info->pac_call_id));
+ 	return NF_ACCEPT;
+diff --git a/net/netfilter/nfnetlink_cthelper.c b/net/netfilter/nfnetlink_cthelper.c
+index a5f294aa8e4c..5b0d0a77379c 100644
+--- a/net/netfilter/nfnetlink_cthelper.c
++++ b/net/netfilter/nfnetlink_cthelper.c
+@@ -103,7 +103,7 @@ nfnl_cthelper_from_nlattr(struct nlattr *attr, struct nf_conn *ct)
+ 	if (help->helper->data_len == 0)
+ 		return -EINVAL;
+ 
+-	nla_memcpy(help->data, nla_data(attr), sizeof(help->data));
++	nla_memcpy(help->data, attr, sizeof(help->data));
+ 	return 0;
+ }
+ 
+@@ -240,6 +240,7 @@ nfnl_cthelper_create(const struct nlattr * const tb[],
+ 		ret = -ENOMEM;
+ 		goto err2;
+ 	}
++	helper->data_len = size;
+ 
+ 	helper->flags |= NF_CT_HELPER_F_USERSPACE;
+ 	memcpy(&helper->tuple, tuple, sizeof(struct nf_conntrack_tuple));
+diff --git a/net/qrtr/qrtr.c b/net/qrtr/qrtr.c
+index b7b854621c26..9d38c14d251a 100644
+--- a/net/qrtr/qrtr.c
++++ b/net/qrtr/qrtr.c
+@@ -855,7 +855,7 @@ static int qrtr_bcast_enqueue(struct qrtr_node *node, struct sk_buff *skb,
+ 	}
+ 	mutex_unlock(&qrtr_node_lock);
+ 
+-	qrtr_local_enqueue(node, skb, type, from, to);
++	qrtr_local_enqueue(NULL, skb, type, from, to);
+ 
+ 	return 0;
+ }
+diff --git a/net/sctp/sm_sideeffect.c b/net/sctp/sm_sideeffect.c
+index 2bc29463e1dc..9f36fe911d08 100644
+--- a/net/sctp/sm_sideeffect.c
++++ b/net/sctp/sm_sideeffect.c
+@@ -1523,9 +1523,17 @@ static int sctp_cmd_interpreter(enum sctp_event_type event_type,
+ 			timeout = asoc->timeouts[cmd->obj.to];
+ 			BUG_ON(!timeout);
+ 
+-			timer->expires = jiffies + timeout;
+-			sctp_association_hold(asoc);
+-			add_timer(timer);
++			/*
++			 * SCTP has a hard time with timer starts.  Because we process
++			 * timer starts as side effects, it can be hard to tell if we
++			 * have already started a timer or not, which leads to BUG
++			 * halts when we call add_timer. So here, instead of just starting
++			 * a timer, if the timer is already started, and just mod
++			 * the timer with the shorter of the two expiration times
++			 */
++			if (!timer_pending(timer))
++				sctp_association_hold(asoc);
++			timer_reduce(timer, jiffies + timeout);
+ 			break;
+ 
+ 		case SCTP_CMD_TIMER_RESTART:
+diff --git a/net/sctp/sm_statefuns.c b/net/sctp/sm_statefuns.c
+index 26788f4a3b9e..e86620fbd90f 100644
+--- a/net/sctp/sm_statefuns.c
++++ b/net/sctp/sm_statefuns.c
+@@ -1856,12 +1856,13 @@ static enum sctp_disposition sctp_sf_do_dupcook_a(
+ 	/* Update the content of current association. */
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_UPDATE_ASSOC, SCTP_ASOC(new_asoc));
+ 	sctp_add_cmd_sf(commands, SCTP_CMD_EVENT_ULP, SCTP_ULPEVENT(ev));
+-	if (sctp_state(asoc, SHUTDOWN_PENDING) &&
++	if ((sctp_state(asoc, SHUTDOWN_PENDING) ||
++	     sctp_state(asoc, SHUTDOWN_SENT)) &&
+ 	    (sctp_sstate(asoc->base.sk, CLOSING) ||
+ 	     sock_flag(asoc->base.sk, SOCK_DEAD))) {
+-		/* if were currently in SHUTDOWN_PENDING, but the socket
+-		 * has been closed by user, don't transition to ESTABLISHED.
+-		 * Instead trigger SHUTDOWN bundled with COOKIE_ACK.
++		/* If the socket has been closed by user, don't
++		 * transition to ESTABLISHED. Instead trigger SHUTDOWN
++		 * bundled with COOKIE_ACK.
+ 		 */
+ 		sctp_add_cmd_sf(commands, SCTP_CMD_REPLY, SCTP_CHUNK(repl));
+ 		return sctp_sf_do_9_2_start_shutdown(net, ep, asoc,
+diff --git a/net/tipc/udp_media.c b/net/tipc/udp_media.c
+index d6620ad53546..28a283f26a8d 100644
+--- a/net/tipc/udp_media.c
++++ b/net/tipc/udp_media.c
+@@ -161,9 +161,11 @@ static int tipc_udp_xmit(struct net *net, struct sk_buff *skb,
+ 			 struct udp_bearer *ub, struct udp_media_addr *src,
+ 			 struct udp_media_addr *dst, struct dst_cache *cache)
+ {
+-	struct dst_entry *ndst = dst_cache_get(cache);
++	struct dst_entry *ndst;
+ 	int ttl, err = 0;
+ 
++	local_bh_disable();
++	ndst = dst_cache_get(cache);
+ 	if (dst->proto == htons(ETH_P_IP)) {
+ 		struct rtable *rt = (struct rtable *)ndst;
+ 
+@@ -210,9 +212,11 @@ static int tipc_udp_xmit(struct net *net, struct sk_buff *skb,
+ 					   src->port, dst->port, false);
+ #endif
+ 	}
++	local_bh_enable();
+ 	return err;
+ 
+ tx_error:
++	local_bh_enable();
+ 	kfree_skb(skb);
+ 	return err;
+ }
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index e23f94a5549b..8c2763eb6aae 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -206,10 +206,12 @@ static void tls_decrypt_done(struct crypto_async_request *req, int err)
+ 
+ 	kfree(aead_req);
+ 
++	spin_lock_bh(&ctx->decrypt_compl_lock);
+ 	pending = atomic_dec_return(&ctx->decrypt_pending);
+ 
+-	if (!pending && READ_ONCE(ctx->async_notify))
++	if (!pending && ctx->async_notify)
+ 		complete(&ctx->async_wait.completion);
++	spin_unlock_bh(&ctx->decrypt_compl_lock);
+ }
+ 
+ static int tls_do_decryption(struct sock *sk,
+@@ -467,10 +469,12 @@ static void tls_encrypt_done(struct crypto_async_request *req, int err)
+ 			ready = true;
+ 	}
+ 
++	spin_lock_bh(&ctx->encrypt_compl_lock);
+ 	pending = atomic_dec_return(&ctx->encrypt_pending);
+ 
+-	if (!pending && READ_ONCE(ctx->async_notify))
++	if (!pending && ctx->async_notify)
+ 		complete(&ctx->async_wait.completion);
++	spin_unlock_bh(&ctx->encrypt_compl_lock);
+ 
+ 	if (!ready)
+ 		return;
+@@ -780,7 +784,7 @@ static int tls_push_record(struct sock *sk, int flags,
+ 
+ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 			       bool full_record, u8 record_type,
+-			       size_t *copied, int flags)
++			       ssize_t *copied, int flags)
+ {
+ 	struct tls_context *tls_ctx = tls_get_ctx(sk);
+ 	struct tls_sw_context_tx *ctx = tls_sw_ctx_tx(tls_ctx);
+@@ -796,9 +800,10 @@ static int bpf_exec_tx_verdict(struct sk_msg *msg, struct sock *sk,
+ 	psock = sk_psock_get(sk);
+ 	if (!psock || !policy) {
+ 		err = tls_push_record(sk, flags, record_type);
+-		if (err && err != -EINPROGRESS) {
++		if (err && sk->sk_err == EBADMSG) {
+ 			*copied -= sk_msg_free(sk, msg);
+ 			tls_free_open_rec(sk);
++			err = -sk->sk_err;
+ 		}
+ 		if (psock)
+ 			sk_psock_put(sk, psock);
+@@ -824,9 +829,10 @@ more_data:
+ 	switch (psock->eval) {
+ 	case __SK_PASS:
+ 		err = tls_push_record(sk, flags, record_type);
+-		if (err && err != -EINPROGRESS) {
++		if (err && sk->sk_err == EBADMSG) {
+ 			*copied -= sk_msg_free(sk, msg);
+ 			tls_free_open_rec(sk);
++			err = -sk->sk_err;
+ 			goto out_err;
+ 		}
+ 		break;
+@@ -916,7 +922,8 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 	unsigned char record_type = TLS_RECORD_TYPE_DATA;
+ 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
+ 	bool eor = !(msg->msg_flags & MSG_MORE);
+-	size_t try_to_copy, copied = 0;
++	size_t try_to_copy;
++	ssize_t copied = 0;
+ 	struct sk_msg *msg_pl, *msg_en;
+ 	struct tls_rec *rec;
+ 	int required_size;
+@@ -926,6 +933,7 @@ int tls_sw_sendmsg(struct sock *sk, struct msghdr *msg, size_t size)
+ 	int num_zc = 0;
+ 	int orig_size;
+ 	int ret = 0;
++	int pending;
+ 
+ 	if (msg->msg_flags & ~(MSG_MORE | MSG_DONTWAIT | MSG_NOSIGNAL))
+ 		return -EOPNOTSUPP;
+@@ -1092,13 +1100,19 @@ trim_sgl:
+ 		goto send_end;
+ 	} else if (num_zc) {
+ 		/* Wait for pending encryptions to get completed */
+-		smp_store_mb(ctx->async_notify, true);
++		spin_lock_bh(&ctx->encrypt_compl_lock);
++		ctx->async_notify = true;
+ 
+-		if (atomic_read(&ctx->encrypt_pending))
++		pending = atomic_read(&ctx->encrypt_pending);
++		spin_unlock_bh(&ctx->encrypt_compl_lock);
++		if (pending)
+ 			crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
+ 		else
+ 			reinit_completion(&ctx->async_wait.completion);
+ 
++		/* There can be no concurrent accesses, since we have no
++		 * pending encrypt operations
++		 */
+ 		WRITE_ONCE(ctx->async_notify, false);
+ 
+ 		if (ctx->async_wait.err) {
+@@ -1118,7 +1132,7 @@ send_end:
+ 
+ 	release_sock(sk);
+ 	mutex_unlock(&tls_ctx->tx_lock);
+-	return copied ? copied : ret;
++	return copied > 0 ? copied : ret;
+ }
+ 
+ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+@@ -1132,7 +1146,7 @@ static int tls_sw_do_sendpage(struct sock *sk, struct page *page,
+ 	struct sk_msg *msg_pl;
+ 	struct tls_rec *rec;
+ 	int num_async = 0;
+-	size_t copied = 0;
++	ssize_t copied = 0;
+ 	bool full_record;
+ 	int record_room;
+ 	int ret = 0;
+@@ -1234,7 +1248,7 @@ wait_for_memory:
+ 	}
+ sendpage_end:
+ 	ret = sk_stream_error(sk, flags, ret);
+-	return copied ? copied : ret;
++	return copied > 0 ? copied : ret;
+ }
+ 
+ int tls_sw_sendpage_locked(struct sock *sk, struct page *page,
+@@ -1729,6 +1743,7 @@ int tls_sw_recvmsg(struct sock *sk,
+ 	bool is_kvec = iov_iter_is_kvec(&msg->msg_iter);
+ 	bool is_peek = flags & MSG_PEEK;
+ 	int num_async = 0;
++	int pending;
+ 
+ 	flags |= nonblock;
+ 
+@@ -1891,8 +1906,11 @@ pick_next_record:
+ recv_end:
+ 	if (num_async) {
+ 		/* Wait for all previously submitted records to be decrypted */
+-		smp_store_mb(ctx->async_notify, true);
+-		if (atomic_read(&ctx->decrypt_pending)) {
++		spin_lock_bh(&ctx->decrypt_compl_lock);
++		ctx->async_notify = true;
++		pending = atomic_read(&ctx->decrypt_pending);
++		spin_unlock_bh(&ctx->decrypt_compl_lock);
++		if (pending) {
+ 			err = crypto_wait_req(-EINPROGRESS, &ctx->async_wait);
+ 			if (err) {
+ 				/* one of async decrypt failed */
+@@ -1904,6 +1922,10 @@ recv_end:
+ 		} else {
+ 			reinit_completion(&ctx->async_wait.completion);
+ 		}
++
++		/* There can be no concurrent accesses, since we have no
++		 * pending decrypt operations
++		 */
+ 		WRITE_ONCE(ctx->async_notify, false);
+ 
+ 		/* Drain records from the rx_list & copy if required */
+@@ -2290,6 +2312,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 
+ 	if (tx) {
+ 		crypto_init_wait(&sw_ctx_tx->async_wait);
++		spin_lock_init(&sw_ctx_tx->encrypt_compl_lock);
+ 		crypto_info = &ctx->crypto_send.info;
+ 		cctx = &ctx->tx;
+ 		aead = &sw_ctx_tx->aead_send;
+@@ -2298,6 +2321,7 @@ int tls_set_sw_offload(struct sock *sk, struct tls_context *ctx, int tx)
+ 		sw_ctx_tx->tx_work.sk = sk;
+ 	} else {
+ 		crypto_init_wait(&sw_ctx_rx->async_wait);
++		spin_lock_init(&sw_ctx_rx->decrypt_compl_lock);
+ 		crypto_info = &ctx->crypto_recv.info;
+ 		cctx = &ctx->rx;
+ 		skb_queue_head_init(&sw_ctx_rx->rx_list);
+diff --git a/net/wireless/core.c b/net/wireless/core.c
+index 3e25229a059d..ee5bb8d8af04 100644
+--- a/net/wireless/core.c
++++ b/net/wireless/core.c
+@@ -142,7 +142,7 @@ int cfg80211_dev_rename(struct cfg80211_registered_device *rdev,
+ 	if (result)
+ 		return result;
+ 
+-	if (rdev->wiphy.debugfsdir)
++	if (!IS_ERR_OR_NULL(rdev->wiphy.debugfsdir))
+ 		debugfs_rename(rdev->wiphy.debugfsdir->d_parent,
+ 			       rdev->wiphy.debugfsdir,
+ 			       rdev->wiphy.debugfsdir->d_parent, newname);
+diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
+index ed7a6060f73c..3889bd9aec46 100644
+--- a/net/xdp/xdp_umem.c
++++ b/net/xdp/xdp_umem.c
+@@ -341,8 +341,8 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ {
+ 	bool unaligned_chunks = mr->flags & XDP_UMEM_UNALIGNED_CHUNK_FLAG;
+ 	u32 chunk_size = mr->chunk_size, headroom = mr->headroom;
++	u64 npgs, addr = mr->addr, size = mr->len;
+ 	unsigned int chunks, chunks_per_page;
+-	u64 addr = mr->addr, size = mr->len;
+ 	int err;
+ 
+ 	if (chunk_size < XDP_UMEM_MIN_CHUNK_SIZE || chunk_size > PAGE_SIZE) {
+@@ -372,6 +372,10 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	if ((addr + size) < addr)
+ 		return -EINVAL;
+ 
++	npgs = div_u64(size, PAGE_SIZE);
++	if (npgs > U32_MAX)
++		return -EINVAL;
++
+ 	chunks = (unsigned int)div_u64(size, chunk_size);
+ 	if (chunks == 0)
+ 		return -EINVAL;
+@@ -391,7 +395,7 @@ static int xdp_umem_reg(struct xdp_umem *umem, struct xdp_umem_reg *mr)
+ 	umem->size = size;
+ 	umem->headroom = headroom;
+ 	umem->chunk_size_nohr = chunk_size - headroom;
+-	umem->npgs = size / PAGE_SIZE;
++	umem->npgs = (u32)npgs;
+ 	umem->pgs = NULL;
+ 	umem->user = NULL;
+ 	umem->flags = mr->flags;
+diff --git a/net/xfrm/espintcp.c b/net/xfrm/espintcp.c
+index f15d6a564b0e..36abb6750ffe 100644
+--- a/net/xfrm/espintcp.c
++++ b/net/xfrm/espintcp.c
+@@ -379,6 +379,7 @@ static void espintcp_destruct(struct sock *sk)
+ {
+ 	struct espintcp_ctx *ctx = espintcp_getctx(sk);
+ 
++	ctx->saved_destruct(sk);
+ 	kfree(ctx);
+ }
+ 
+@@ -419,6 +420,7 @@ static int espintcp_init_sk(struct sock *sk)
+ 	sk->sk_socket->ops = &espintcp_ops;
+ 	ctx->saved_data_ready = sk->sk_data_ready;
+ 	ctx->saved_write_space = sk->sk_write_space;
++	ctx->saved_destruct = sk->sk_destruct;
+ 	sk->sk_data_ready = espintcp_data_ready;
+ 	sk->sk_write_space = espintcp_write_space;
+ 	sk->sk_destruct = espintcp_destruct;
+diff --git a/net/xfrm/xfrm_device.c b/net/xfrm/xfrm_device.c
+index e2db468cf50e..4c1b939616b3 100644
+--- a/net/xfrm/xfrm_device.c
++++ b/net/xfrm/xfrm_device.c
+@@ -25,12 +25,10 @@ static void __xfrm_transport_prep(struct xfrm_state *x, struct sk_buff *skb,
+ 	struct xfrm_offload *xo = xfrm_offload(skb);
+ 
+ 	skb_reset_mac_len(skb);
+-	pskb_pull(skb, skb->mac_len + hsize + x->props.header_len);
+-
+-	if (xo->flags & XFRM_GSO_SEGMENT) {
+-		skb_reset_transport_header(skb);
++	if (xo->flags & XFRM_GSO_SEGMENT)
+ 		skb->transport_header -= x->props.header_len;
+-	}
++
++	pskb_pull(skb, skb_transport_offset(skb) + x->props.header_len);
+ }
+ 
+ static void __xfrm_mode_tunnel_prep(struct xfrm_state *x, struct sk_buff *skb,
+diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c
+index aa35f23c4912..8a202c44f89a 100644
+--- a/net/xfrm/xfrm_input.c
++++ b/net/xfrm/xfrm_input.c
+@@ -644,7 +644,7 @@ resume:
+ 		dev_put(skb->dev);
+ 
+ 		spin_lock(&x->lock);
+-		if (nexthdr <= 0) {
++		if (nexthdr < 0) {
+ 			if (nexthdr == -EBADMSG) {
+ 				xfrm_audit_state_icvfail(x, skb,
+ 							 x->type->proto);
+diff --git a/net/xfrm/xfrm_interface.c b/net/xfrm/xfrm_interface.c
+index 3361e3ac5714..1e115cbf21d3 100644
+--- a/net/xfrm/xfrm_interface.c
++++ b/net/xfrm/xfrm_interface.c
+@@ -750,7 +750,28 @@ static struct rtnl_link_ops xfrmi_link_ops __read_mostly = {
+ 	.get_link_net	= xfrmi_get_link_net,
+ };
+ 
++static void __net_exit xfrmi_exit_batch_net(struct list_head *net_exit_list)
++{
++	struct net *net;
++	LIST_HEAD(list);
++
++	rtnl_lock();
++	list_for_each_entry(net, net_exit_list, exit_list) {
++		struct xfrmi_net *xfrmn = net_generic(net, xfrmi_net_id);
++		struct xfrm_if __rcu **xip;
++		struct xfrm_if *xi;
++
++		for (xip = &xfrmn->xfrmi[0];
++		     (xi = rtnl_dereference(*xip)) != NULL;
++		     xip = &xi->next)
++			unregister_netdevice_queue(xi->dev, &list);
++	}
++	unregister_netdevice_many(&list);
++	rtnl_unlock();
++}
++
+ static struct pernet_operations xfrmi_net_ops = {
++	.exit_batch = xfrmi_exit_batch_net,
+ 	.id   = &xfrmi_net_id,
+ 	.size = sizeof(struct xfrmi_net),
+ };
+diff --git a/net/xfrm/xfrm_output.c b/net/xfrm/xfrm_output.c
+index fafc7aba705f..d5f5a787ebbc 100644
+--- a/net/xfrm/xfrm_output.c
++++ b/net/xfrm/xfrm_output.c
+@@ -583,18 +583,20 @@ int xfrm_output(struct sock *sk, struct sk_buff *skb)
+ 		xfrm_state_hold(x);
+ 
+ 		if (skb_is_gso(skb)) {
+-			skb_shinfo(skb)->gso_type |= SKB_GSO_ESP;
++			if (skb->inner_protocol)
++				return xfrm_output_gso(net, sk, skb);
+ 
+-			return xfrm_output2(net, sk, skb);
++			skb_shinfo(skb)->gso_type |= SKB_GSO_ESP;
++			goto out;
+ 		}
+ 
+ 		if (x->xso.dev && x->xso.dev->features & NETIF_F_HW_ESP_TX_CSUM)
+ 			goto out;
++	} else {
++		if (skb_is_gso(skb))
++			return xfrm_output_gso(net, sk, skb);
+ 	}
+ 
+-	if (skb_is_gso(skb))
+-		return xfrm_output_gso(net, sk, skb);
+-
+ 	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ 		err = skb_checksum_help(skb);
+ 		if (err) {
+@@ -640,7 +642,8 @@ void xfrm_local_error(struct sk_buff *skb, int mtu)
+ 
+ 	if (skb->protocol == htons(ETH_P_IP))
+ 		proto = AF_INET;
+-	else if (skb->protocol == htons(ETH_P_IPV6))
++	else if (skb->protocol == htons(ETH_P_IPV6) &&
++		 skb->sk->sk_family == AF_INET6)
+ 		proto = AF_INET6;
+ 	else
+ 		return;
+diff --git a/net/xfrm/xfrm_policy.c b/net/xfrm/xfrm_policy.c
+index 8a4af86a285e..580735652754 100644
+--- a/net/xfrm/xfrm_policy.c
++++ b/net/xfrm/xfrm_policy.c
+@@ -1436,12 +1436,7 @@ static void xfrm_policy_requeue(struct xfrm_policy *old,
+ static bool xfrm_policy_mark_match(struct xfrm_policy *policy,
+ 				   struct xfrm_policy *pol)
+ {
+-	u32 mark = policy->mark.v & policy->mark.m;
+-
+-	if (policy->mark.v == pol->mark.v && policy->mark.m == pol->mark.m)
+-		return true;
+-
+-	if ((mark & pol->mark.m) == pol->mark.v &&
++	if (policy->mark.v == pol->mark.v &&
+ 	    policy->priority == pol->priority)
+ 		return true;
+ 
+diff --git a/samples/bpf/lwt_len_hist_user.c b/samples/bpf/lwt_len_hist_user.c
+index 587b68b1f8dd..430a4b7e353e 100644
+--- a/samples/bpf/lwt_len_hist_user.c
++++ b/samples/bpf/lwt_len_hist_user.c
+@@ -15,8 +15,6 @@
+ #define MAX_INDEX 64
+ #define MAX_STARS 38
+ 
+-char bpf_log_buf[BPF_LOG_BUF_SIZE];
+-
+ static void stars(char *str, long val, long max, int width)
+ {
+ 	int i;
+diff --git a/security/commoncap.c b/security/commoncap.c
+index f4ee0ae106b2..0ca31c8bc0b1 100644
+--- a/security/commoncap.c
++++ b/security/commoncap.c
+@@ -812,6 +812,7 @@ int cap_bprm_set_creds(struct linux_binprm *bprm)
+ 	int ret;
+ 	kuid_t root_uid;
+ 
++	new->cap_ambient = old->cap_ambient;
+ 	if (WARN_ON(!cap_ambient_invariant_ok(old)))
+ 		return -EPERM;
+ 
+diff --git a/sound/core/hwdep.c b/sound/core/hwdep.c
+index b412d3b3d5ff..21edb8ac95eb 100644
+--- a/sound/core/hwdep.c
++++ b/sound/core/hwdep.c
+@@ -216,12 +216,12 @@ static int snd_hwdep_dsp_load(struct snd_hwdep *hw,
+ 	if (info.index >= 32)
+ 		return -EINVAL;
+ 	/* check whether the dsp was already loaded */
+-	if (hw->dsp_loaded & (1 << info.index))
++	if (hw->dsp_loaded & (1u << info.index))
+ 		return -EBUSY;
+ 	err = hw->ops.dsp_load(hw, &info);
+ 	if (err < 0)
+ 		return err;
+-	hw->dsp_loaded |= (1 << info.index);
++	hw->dsp_loaded |= (1u << info.index);
+ 	return 0;
+ }
+ 
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index 041d2a32059b..e62d58872b6e 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -384,6 +384,7 @@ static void alc_fill_eapd_coef(struct hda_codec *codec)
+ 	case 0x10ec0282:
+ 	case 0x10ec0283:
+ 	case 0x10ec0286:
++	case 0x10ec0287:
+ 	case 0x10ec0288:
+ 	case 0x10ec0285:
+ 	case 0x10ec0298:
+@@ -5484,18 +5485,9 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec,
+ 		{ 0x19, 0x21a11010 }, /* dock mic */
+ 		{ }
+ 	};
+-	/* Assure the speaker pin to be coupled with DAC NID 0x03; otherwise
+-	 * the speaker output becomes too low by some reason on Thinkpads with
+-	 * ALC298 codec
+-	 */
+-	static const hda_nid_t preferred_pairs[] = {
+-		0x14, 0x03, 0x17, 0x02, 0x21, 0x02,
+-		0
+-	};
+ 	struct alc_spec *spec = codec->spec;
+ 
+ 	if (action == HDA_FIXUP_ACT_PRE_PROBE) {
+-		spec->gen.preferred_dacs = preferred_pairs;
+ 		spec->parse_flags = HDA_PINCFG_NO_HP_FIXUP;
+ 		snd_hda_apply_pincfgs(codec, pincfgs);
+ 	} else if (action == HDA_FIXUP_ACT_INIT) {
+@@ -5508,6 +5500,23 @@ static void alc_fixup_tpt470_dock(struct hda_codec *codec,
+ 	}
+ }
+ 
++static void alc_fixup_tpt470_dacs(struct hda_codec *codec,
++				  const struct hda_fixup *fix, int action)
++{
++	/* Assure the speaker pin to be coupled with DAC NID 0x03; otherwise
++	 * the speaker output becomes too low by some reason on Thinkpads with
++	 * ALC298 codec
++	 */
++	static const hda_nid_t preferred_pairs[] = {
++		0x14, 0x03, 0x17, 0x02, 0x21, 0x02,
++		0
++	};
++	struct alc_spec *spec = codec->spec;
++
++	if (action == HDA_FIXUP_ACT_PRE_PROBE)
++		spec->gen.preferred_dacs = preferred_pairs;
++}
++
+ static void alc_shutup_dell_xps13(struct hda_codec *codec)
+ {
+ 	struct alc_spec *spec = codec->spec;
+@@ -6063,6 +6072,7 @@ enum {
+ 	ALC700_FIXUP_INTEL_REFERENCE,
+ 	ALC274_FIXUP_DELL_BIND_DACS,
+ 	ALC274_FIXUP_DELL_AIO_LINEOUT_VERB,
++	ALC298_FIXUP_TPT470_DOCK_FIX,
+ 	ALC298_FIXUP_TPT470_DOCK,
+ 	ALC255_FIXUP_DUMMY_LINEOUT_VERB,
+ 	ALC255_FIXUP_DELL_HEADSET_MIC,
+@@ -6994,12 +7004,18 @@ static const struct hda_fixup alc269_fixups[] = {
+ 		.chained = true,
+ 		.chain_id = ALC274_FIXUP_DELL_BIND_DACS
+ 	},
+-	[ALC298_FIXUP_TPT470_DOCK] = {
++	[ALC298_FIXUP_TPT470_DOCK_FIX] = {
+ 		.type = HDA_FIXUP_FUNC,
+ 		.v.func = alc_fixup_tpt470_dock,
+ 		.chained = true,
+ 		.chain_id = ALC293_FIXUP_LENOVO_SPK_NOISE
+ 	},
++	[ALC298_FIXUP_TPT470_DOCK] = {
++		.type = HDA_FIXUP_FUNC,
++		.v.func = alc_fixup_tpt470_dacs,
++		.chained = true,
++		.chain_id = ALC298_FIXUP_TPT470_DOCK_FIX
++	},
+ 	[ALC255_FIXUP_DUMMY_LINEOUT_VERB] = {
+ 		.type = HDA_FIXUP_PINS,
+ 		.v.pins = (const struct hda_pintbl[]) {
+@@ -7638,6 +7654,7 @@ static const struct hda_model_fixup alc269_fixup_models[] = {
+ 	{.id = ALC292_FIXUP_TPT440_DOCK, .name = "tpt440-dock"},
+ 	{.id = ALC292_FIXUP_TPT440, .name = "tpt440"},
+ 	{.id = ALC292_FIXUP_TPT460, .name = "tpt460"},
++	{.id = ALC298_FIXUP_TPT470_DOCK_FIX, .name = "tpt470-dock-fix"},
+ 	{.id = ALC298_FIXUP_TPT470_DOCK, .name = "tpt470-dock"},
+ 	{.id = ALC233_FIXUP_LENOVO_MULTI_CODECS, .name = "dual-codecs"},
+ 	{.id = ALC700_FIXUP_INTEL_REFERENCE, .name = "alc700-ref"},
+@@ -8276,6 +8293,7 @@ static int patch_alc269(struct hda_codec *codec)
+ 	case 0x10ec0215:
+ 	case 0x10ec0245:
+ 	case 0x10ec0285:
++	case 0x10ec0287:
+ 	case 0x10ec0289:
+ 		spec->codec_variant = ALC269_TYPE_ALC215;
+ 		spec->shutup = alc225_shutup;
+@@ -9554,6 +9572,7 @@ static const struct hda_device_id snd_hda_id_realtek[] = {
+ 	HDA_CODEC_ENTRY(0x10ec0284, "ALC284", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0285, "ALC285", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0286, "ALC286", patch_alc269),
++	HDA_CODEC_ENTRY(0x10ec0287, "ALC287", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0288, "ALC288", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0289, "ALC289", patch_alc269),
+ 	HDA_CODEC_ENTRY(0x10ec0290, "ALC290", patch_alc269),
+diff --git a/sound/usb/mixer.c b/sound/usb/mixer.c
+index 7a2961ad60de..68fefe55e5c0 100644
+--- a/sound/usb/mixer.c
++++ b/sound/usb/mixer.c
+@@ -1171,6 +1171,14 @@ static void volume_control_quirks(struct usb_mixer_elem_info *cval,
+ 			cval->res = 384;
+ 		}
+ 		break;
++	case USB_ID(0x0495, 0x3042): /* ESS Technology Asus USB DAC */
++		if ((strstr(kctl->id.name, "Playback Volume") != NULL) ||
++			strstr(kctl->id.name, "Capture Volume") != NULL) {
++			cval->min >>= 8;
++			cval->max = 0;
++			cval->res = 1;
++		}
++		break;
+ 	}
+ }
+ 
+diff --git a/sound/usb/mixer_maps.c b/sound/usb/mixer_maps.c
+index 0260c750e156..9af7aa93f6fa 100644
+--- a/sound/usb/mixer_maps.c
++++ b/sound/usb/mixer_maps.c
+@@ -397,6 +397,21 @@ static const struct usbmix_connector_map trx40_mobo_connector_map[] = {
+ 	{}
+ };
+ 
++/* Rear panel + front mic on Gigabyte TRX40 Aorus Master with ALC1220-VB */
++static const struct usbmix_name_map aorus_master_alc1220vb_map[] = {
++	{ 17, NULL },			/* OT, IEC958?, disabled */
++	{ 19, NULL, 12 }, /* FU, Input Gain Pad - broken response, disabled */
++	{ 16, "Line Out" },		/* OT */
++	{ 22, "Line Out Playback" },	/* FU */
++	{ 7, "Line" },			/* IT */
++	{ 19, "Line Capture" },		/* FU */
++	{ 8, "Mic" },			/* IT */
++	{ 20, "Mic Capture" },		/* FU */
++	{ 9, "Front Mic" },		/* IT */
++	{ 21, "Front Mic Capture" },	/* FU */
++	{}
++};
++
+ /*
+  * Control map entries
+  */
+@@ -526,6 +541,10 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.id = USB_ID(0x1b1c, 0x0a42),
+ 		.map = corsair_virtuoso_map,
+ 	},
++	{	/* Gigabyte TRX40 Aorus Master (rear panel + front mic) */
++		.id = USB_ID(0x0414, 0xa001),
++		.map = aorus_master_alc1220vb_map,
++	},
+ 	{	/* Gigabyte TRX40 Aorus Pro WiFi */
+ 		.id = USB_ID(0x0414, 0xa002),
+ 		.map = trx40_mobo_map,
+@@ -549,6 +568,11 @@ static const struct usbmix_ctl_map usbmix_ctl_maps[] = {
+ 		.map = trx40_mobo_map,
+ 		.connector_map = trx40_mobo_connector_map,
+ 	},
++	{	/* Asrock TRX40 Creator */
++		.id = USB_ID(0x26ce, 0x0a01),
++		.map = trx40_mobo_map,
++		.connector_map = trx40_mobo_connector_map,
++	},
+ 	{ 0 } /* terminator */
+ };
+ 
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index 8c2f5c23e1b4..bbae11605a4c 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -3647,6 +3647,32 @@ AU0828_DEVICE(0x2040, 0x7270, "Hauppauge", "HVR-950Q"),
+ ALC1220_VB_DESKTOP(0x0414, 0xa002), /* Gigabyte TRX40 Aorus Pro WiFi */
+ ALC1220_VB_DESKTOP(0x0db0, 0x0d64), /* MSI TRX40 Creator */
+ ALC1220_VB_DESKTOP(0x0db0, 0x543d), /* MSI TRX40 */
++ALC1220_VB_DESKTOP(0x26ce, 0x0a01), /* Asrock TRX40 Creator */
+ #undef ALC1220_VB_DESKTOP
+ 
++/* Two entries for Gigabyte TRX40 Aorus Master:
++ * TRX40 Aorus Master has two USB-audio devices, one for the front headphone
++ * with ESS SABRE9218 DAC chip, while another for the rest I/O (the rear
++ * panel and the front mic) with Realtek ALC1220-VB.
++ * Here we provide two distinct names for making UCM profiles easier.
++ */
++{
++	USB_DEVICE(0x0414, 0xa000),
++	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++		.vendor_name = "Gigabyte",
++		.product_name = "Aorus Master Front Headphone",
++		.profile_name = "Gigabyte-Aorus-Master-Front-Headphone",
++		.ifnum = QUIRK_NO_INTERFACE
++	}
++},
++{
++	USB_DEVICE(0x0414, 0xa001),
++	.driver_info = (unsigned long) & (const struct snd_usb_audio_quirk) {
++		.vendor_name = "Gigabyte",
++		.product_name = "Aorus Master Main Audio",
++		.profile_name = "Gigabyte-Aorus-Master-Main-Audio",
++		.ifnum = QUIRK_NO_INTERFACE
++	}
++},
++
+ #undef USB_DEVICE_VENDOR_SPEC
+diff --git a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+index cd5e1f602ac9..909da9cdda97 100644
+--- a/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
++++ b/tools/testing/selftests/dmabuf-heaps/dmabuf-heap.c
+@@ -351,6 +351,7 @@ static int test_alloc_errors(char *heap_name)
+ 	}
+ 
+ 	printf("Expected error checking passed\n");
++	ret = 0;
+ out:
+ 	if (dmabuf_fd >= 0)
+ 		close(dmabuf_fd);


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-06-07 21:54 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-06-07 21:54 UTC (permalink / raw
  To: gentoo-commits

commit:     e989bdfcc48b199cf8faa0375950668a3b8e5ec5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Sun Jun  7 21:54:35 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Sun Jun  7 21:54:35 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e989bdfc

Linux patch 5.6.17

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1016_linux-5.6.17.patch | 1227 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1231 insertions(+)

diff --git a/0000_README b/0000_README
index eb1d2c7..07595c4 100644
--- a/0000_README
+++ b/0000_README
@@ -107,6 +107,10 @@ Patch:  1015_linux-5.6.16.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.16
 
+Patch:  1016_linux-5.6.17.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.17
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1016_linux-5.6.17.patch b/1016_linux-5.6.17.patch
new file mode 100644
index 0000000..698ce9a
--- /dev/null
+++ b/1016_linux-5.6.17.patch
@@ -0,0 +1,1227 @@
+diff --git a/Makefile b/Makefile
+index 1befb37dcc58..8254beb87a7b 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 16
++SUBLEVEL = 17
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c
+index aa41af6ef4ac..efdedf83b954 100644
+--- a/arch/arc/kernel/setup.c
++++ b/arch/arc/kernel/setup.c
+@@ -11,6 +11,7 @@
+ #include <linux/clocksource.h>
+ #include <linux/console.h>
+ #include <linux/module.h>
++#include <linux/sizes.h>
+ #include <linux/cpu.h>
+ #include <linux/of_clk.h>
+ #include <linux/of_fdt.h>
+@@ -409,12 +410,12 @@ static void arc_chk_core_config(void)
+ 	if ((unsigned int)__arc_dccm_base != cpu->dccm.base_addr)
+ 		panic("Linux built with incorrect DCCM Base address\n");
+ 
+-	if (CONFIG_ARC_DCCM_SZ != cpu->dccm.sz)
++	if (CONFIG_ARC_DCCM_SZ * SZ_1K != cpu->dccm.sz)
+ 		panic("Linux built with incorrect DCCM Size\n");
+ #endif
+ 
+ #ifdef CONFIG_ARC_HAS_ICCM
+-	if (CONFIG_ARC_ICCM_SZ != cpu->iccm.sz)
++	if (CONFIG_ARC_ICCM_SZ * SZ_1K != cpu->iccm.sz)
+ 		panic("Linux built with incorrect ICCM Size\n");
+ #endif
+ 
+diff --git a/arch/arc/plat-eznps/Kconfig b/arch/arc/plat-eznps/Kconfig
+index a931d0a256d0..a645bca5899a 100644
+--- a/arch/arc/plat-eznps/Kconfig
++++ b/arch/arc/plat-eznps/Kconfig
+@@ -6,6 +6,7 @@
+ 
+ menuconfig ARC_PLAT_EZNPS
+ 	bool "\"EZchip\" ARC dev platform"
++	depends on ISA_ARCOMPACT
+ 	select CPU_BIG_ENDIAN
+ 	select CLKSRC_NPS if !PHYS_ADDR_T_64BIT
+ 	select EZNPS_GIC
+diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
+index 157924baa191..1dc26384a6c4 100644
+--- a/arch/riscv/mm/init.c
++++ b/arch/riscv/mm/init.c
+@@ -46,7 +46,7 @@ static void setup_zero_page(void)
+ 	memset((void *)empty_zero_page, 0, PAGE_SIZE);
+ }
+ 
+-#ifdef CONFIG_DEBUG_VM
++#if defined(CONFIG_MMU) && defined(CONFIG_DEBUG_VM)
+ static inline void print_mlk(char *name, unsigned long b, unsigned long t)
+ {
+ 	pr_notice("%12s : 0x%08lx - 0x%08lx   (%4ld kB)\n", name, b, t,
+diff --git a/arch/s390/mm/hugetlbpage.c b/arch/s390/mm/hugetlbpage.c
+index 5674710a4841..7dfae86afa47 100644
+--- a/arch/s390/mm/hugetlbpage.c
++++ b/arch/s390/mm/hugetlbpage.c
+@@ -159,10 +159,13 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
+ 		rste &= ~_SEGMENT_ENTRY_NOEXEC;
+ 
+ 	/* Set correct table type for 2G hugepages */
+-	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3)
+-		rste |= _REGION_ENTRY_TYPE_R3 | _REGION3_ENTRY_LARGE;
+-	else
++	if ((pte_val(*ptep) & _REGION_ENTRY_TYPE_MASK) == _REGION_ENTRY_TYPE_R3) {
++		if (likely(pte_present(pte)))
++			rste |= _REGION3_ENTRY_LARGE;
++		rste |= _REGION_ENTRY_TYPE_R3;
++	} else if (likely(pte_present(pte)))
+ 		rste |= _SEGMENT_ENTRY_LARGE;
++
+ 	clear_huge_pte_skeys(mm, rste);
+ 	pte_val(*ptep) = rste;
+ }
+diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
+index fd51bac11b46..acf76b466db6 100644
+--- a/arch/x86/hyperv/hv_init.c
++++ b/arch/x86/hyperv/hv_init.c
+@@ -226,10 +226,18 @@ static int hv_cpu_die(unsigned int cpu)
+ 
+ 	rdmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
+ 	if (re_ctrl.target_vp == hv_vp_index[cpu]) {
+-		/* Reassign to some other online CPU */
++		/*
++		 * Reassign reenlightenment notifications to some other online
++		 * CPU or just disable the feature if there are no online CPUs
++		 * left (happens on hibernation).
++		 */
+ 		new_cpu = cpumask_any_but(cpu_online_mask, cpu);
+ 
+-		re_ctrl.target_vp = hv_vp_index[new_cpu];
++		if (new_cpu < nr_cpu_ids)
++			re_ctrl.target_vp = hv_vp_index[new_cpu];
++		else
++			re_ctrl.enabled = 0;
++
+ 		wrmsrl(HV_X64_MSR_REENLIGHTENMENT_CONTROL, *((u64 *)&re_ctrl));
+ 	}
+ 
+@@ -293,6 +301,13 @@ static void hv_resume(void)
+ 
+ 	hv_hypercall_pg = hv_hypercall_pg_saved;
+ 	hv_hypercall_pg_saved = NULL;
++
++	/*
++	 * Reenlightenment notifications are disabled by hv_cpu_die(0),
++	 * reenable them here if hv_reenlightenment_cb was previously set.
++	 */
++	if (hv_reenlightenment_cb)
++		set_hv_tscchange_cb(hv_reenlightenment_cb);
+ }
+ 
+ /* Note: when the ops are called, only CPU0 is online and IRQs are disabled. */
+diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
+index 64a03f226ab7..dca64a2dda9c 100644
+--- a/arch/x86/include/asm/pgtable.h
++++ b/arch/x86/include/asm/pgtable.h
+@@ -256,6 +256,7 @@ static inline int pmd_large(pmd_t pte)
+ }
+ 
+ #ifdef CONFIG_TRANSPARENT_HUGEPAGE
++/* NOTE: when predicate huge page, consider also pmd_devmap, or use pmd_large */
+ static inline int pmd_trans_huge(pmd_t pmd)
+ {
+ 	return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
+diff --git a/arch/x86/include/uapi/asm/unistd.h b/arch/x86/include/uapi/asm/unistd.h
+index 196fdd02b8b1..be5e2e747f50 100644
+--- a/arch/x86/include/uapi/asm/unistd.h
++++ b/arch/x86/include/uapi/asm/unistd.h
+@@ -2,8 +2,15 @@
+ #ifndef _UAPI_ASM_X86_UNISTD_H
+ #define _UAPI_ASM_X86_UNISTD_H
+ 
+-/* x32 syscall flag bit */
+-#define __X32_SYSCALL_BIT	0x40000000UL
++/*
++ * x32 syscall flag bit.  Some user programs expect syscall NR macros
++ * and __X32_SYSCALL_BIT to have type int, even though syscall numbers
++ * are, for practical purposes, unsigned long.
++ *
++ * Fortunately, expressions like (nr & ~__X32_SYSCALL_BIT) do the right
++ * thing regardless.
++ */
++#define __X32_SYSCALL_BIT	0x40000000
+ 
+ #ifndef __KERNEL__
+ # ifdef __i386__
+diff --git a/arch/x86/mm/mmio-mod.c b/arch/x86/mm/mmio-mod.c
+index 673de6063345..92530af38b09 100644
+--- a/arch/x86/mm/mmio-mod.c
++++ b/arch/x86/mm/mmio-mod.c
+@@ -372,7 +372,7 @@ static void enter_uniprocessor(void)
+ 	int cpu;
+ 	int err;
+ 
+-	if (downed_cpus == NULL &&
++	if (!cpumask_available(downed_cpus) &&
+ 	    !alloc_cpumask_var(&downed_cpus, GFP_KERNEL)) {
+ 		pr_notice("Failed to allocate mask\n");
+ 		goto out;
+@@ -402,7 +402,7 @@ static void leave_uniprocessor(void)
+ 	int cpu;
+ 	int err;
+ 
+-	if (downed_cpus == NULL || cpumask_weight(downed_cpus) == 0)
++	if (!cpumask_available(downed_cpus) || cpumask_weight(downed_cpus) == 0)
+ 		return;
+ 	pr_notice("Re-enabling CPUs...\n");
+ 	for_each_cpu(cpu, downed_cpus) {
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index 69605e21af92..f8b4dc161c02 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -716,17 +716,27 @@ EXPORT_SYMBOL_GPL(crypto_drop_spawn);
+ 
+ static struct crypto_alg *crypto_spawn_alg(struct crypto_spawn *spawn)
+ {
+-	struct crypto_alg *alg;
++	struct crypto_alg *alg = ERR_PTR(-EAGAIN);
++	struct crypto_alg *target;
++	bool shoot = false;
+ 
+ 	down_read(&crypto_alg_sem);
+-	alg = spawn->alg;
+-	if (!spawn->dead && !crypto_mod_get(alg)) {
+-		alg->cra_flags |= CRYPTO_ALG_DYING;
+-		alg = NULL;
++	if (!spawn->dead) {
++		alg = spawn->alg;
++		if (!crypto_mod_get(alg)) {
++			target = crypto_alg_get(alg);
++			shoot = true;
++			alg = ERR_PTR(-EAGAIN);
++		}
+ 	}
+ 	up_read(&crypto_alg_sem);
+ 
+-	return alg ?: ERR_PTR(-EAGAIN);
++	if (shoot) {
++		crypto_shoot_alg(target);
++		crypto_alg_put(target);
++	}
++
++	return alg;
+ }
+ 
+ struct crypto_tfm *crypto_spawn_tfm(struct crypto_spawn *spawn, u32 type,
+diff --git a/crypto/api.c b/crypto/api.c
+index 7d71a9b10e5f..edcf690800d4 100644
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -333,12 +333,13 @@ static unsigned int crypto_ctxsize(struct crypto_alg *alg, u32 type, u32 mask)
+ 	return len;
+ }
+ 
+-static void crypto_shoot_alg(struct crypto_alg *alg)
++void crypto_shoot_alg(struct crypto_alg *alg)
+ {
+ 	down_write(&crypto_alg_sem);
+ 	alg->cra_flags |= CRYPTO_ALG_DYING;
+ 	up_write(&crypto_alg_sem);
+ }
++EXPORT_SYMBOL_GPL(crypto_shoot_alg);
+ 
+ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
+ 				      u32 mask)
+diff --git a/crypto/internal.h b/crypto/internal.h
+index d5ebc60c5143..ff06a3bd1ca1 100644
+--- a/crypto/internal.h
++++ b/crypto/internal.h
+@@ -65,6 +65,7 @@ void crypto_alg_tested(const char *name, int err);
+ void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
+ 			  struct crypto_alg *nalg);
+ void crypto_remove_final(struct list_head *list);
++void crypto_shoot_alg(struct crypto_alg *alg);
+ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
+ 				      u32 mask);
+ void *crypto_create_tfm(struct crypto_alg *alg,
+diff --git a/drivers/block/null_blk_zoned.c b/drivers/block/null_blk_zoned.c
+index ed34785dd64b..5dc955f5ea0a 100644
+--- a/drivers/block/null_blk_zoned.c
++++ b/drivers/block/null_blk_zoned.c
+@@ -20,6 +20,10 @@ int null_zone_init(struct nullb_device *dev)
+ 		pr_err("zone_size must be power-of-two\n");
+ 		return -EINVAL;
+ 	}
++	if (dev->zone_size > dev->size) {
++		pr_err("Zone size larger than device capacity\n");
++		return -EINVAL;
++	}
+ 
+ 	dev->zone_size_sects = dev->zone_size << ZONE_SIZE_SHIFT;
+ 	dev->nr_zones = dev_size >>
+diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
+index 0536866a58ce..4bfbca2add1b 100644
+--- a/drivers/dma/ti/k3-udma.c
++++ b/drivers/dma/ti/k3-udma.c
+@@ -2148,7 +2148,8 @@ udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl,
+ 		d->residue += sg_dma_len(sgent);
+ 	}
+ 
+-	cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags, CPPI5_TR_CSF_EOP);
++	cppi5_tr_csf_set(&tr_req[tr_idx - 1].flags,
++			 CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP);
+ 
+ 	return d;
+ }
+@@ -2725,7 +2726,8 @@ udma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+ 		tr_req[1].dicnt3 = 1;
+ 	}
+ 
+-	cppi5_tr_csf_set(&tr_req[num_tr - 1].flags, CPPI5_TR_CSF_EOP);
++	cppi5_tr_csf_set(&tr_req[num_tr - 1].flags,
++			 CPPI5_TR_CSF_SUPR_EVT | CPPI5_TR_CSF_EOP);
+ 
+ 	if (uc->config.metadata_size)
+ 		d->vd.tx.metadata_ops = &metadata_ops;
+diff --git a/drivers/firmware/efi/earlycon.c b/drivers/firmware/efi/earlycon.c
+index 5d4f84781aa0..a52236e11e5f 100644
+--- a/drivers/firmware/efi/earlycon.c
++++ b/drivers/firmware/efi/earlycon.c
+@@ -114,14 +114,16 @@ static void efi_earlycon_write_char(u32 *dst, unsigned char c, unsigned int h)
+ 	const u32 color_black = 0x00000000;
+ 	const u32 color_white = 0x00ffffff;
+ 	const u8 *src;
+-	u8 s8;
+-	int m;
++	int m, n, bytes;
++	u8 x;
+ 
+-	src = font->data + c * font->height;
+-	s8 = *(src + h);
++	bytes = BITS_TO_BYTES(font->width);
++	src = font->data + c * font->height * bytes + h * bytes;
+ 
+-	for (m = 0; m < 8; m++) {
+-		if ((s8 >> (7 - m)) & 1)
++	for (m = 0; m < font->width; m++) {
++		n = m % 8;
++		x = *(src + m / 8);
++		if ((x >> (7 - n)) & 1)
+ 			*dst = color_white;
+ 		else
+ 			*dst = color_black;
+diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
+index 7bbef4a67350..30e77a9e62b2 100644
+--- a/drivers/firmware/efi/libstub/arm-stub.c
++++ b/drivers/firmware/efi/libstub/arm-stub.c
+@@ -59,7 +59,11 @@ static struct screen_info *setup_graphics(void)
+ 		si = alloc_screen_info();
+ 		if (!si)
+ 			return NULL;
+-		efi_setup_gop(si, &gop_proto, size);
++		status = efi_setup_gop(si, &gop_proto, size);
++		if (status != EFI_SUCCESS) {
++			free_screen_info(si);
++			return NULL;
++		}
+ 	}
+ 	return si;
+ }
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+index 1b6c75a4dd60..fbcd979438e2 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c
+@@ -220,6 +220,30 @@ static enum dpcd_training_patterns
+ 	return dpcd_tr_pattern;
+ }
+ 
++static uint8_t dc_dp_initialize_scrambling_data_symbols(
++	struct dc_link *link,
++	enum dc_dp_training_pattern pattern)
++{
++	uint8_t disable_scrabled_data_symbols = 0;
++
++	switch (pattern) {
++	case DP_TRAINING_PATTERN_SEQUENCE_1:
++	case DP_TRAINING_PATTERN_SEQUENCE_2:
++	case DP_TRAINING_PATTERN_SEQUENCE_3:
++		disable_scrabled_data_symbols = 1;
++		break;
++	case DP_TRAINING_PATTERN_SEQUENCE_4:
++		disable_scrabled_data_symbols = 0;
++		break;
++	default:
++		ASSERT(0);
++		DC_LOG_HW_LINK_TRAINING("%s: Invalid HW Training pattern: %d\n",
++			__func__, pattern);
++		break;
++	}
++	return disable_scrabled_data_symbols;
++}
++
+ static inline bool is_repeater(struct dc_link *link, uint32_t offset)
+ {
+ 	return (!link->is_lttpr_mode_transparent && offset != 0);
+@@ -252,6 +276,9 @@ static void dpcd_set_lt_pattern_and_lane_settings(
+ 	dpcd_pattern.v1_4.TRAINING_PATTERN_SET =
+ 		dc_dp_training_pattern_to_dpcd_training_pattern(link, pattern);
+ 
++	dpcd_pattern.v1_4.SCRAMBLING_DISABLE =
++		dc_dp_initialize_scrambling_data_symbols(link, pattern);
++
+ 	dpcd_lt_buffer[DP_TRAINING_PATTERN_SET - DP_TRAINING_PATTERN_SET]
+ 		= dpcd_pattern.raw;
+ 
+diff --git a/drivers/gpu/drm/drm_edid.c b/drivers/gpu/drm/drm_edid.c
+index 079800a07d6e..5c611baba2fc 100644
+--- a/drivers/gpu/drm/drm_edid.c
++++ b/drivers/gpu/drm/drm_edid.c
+@@ -191,10 +191,11 @@ static const struct edid_quirk {
+ 	{ "HVR", 0xaa01, EDID_QUIRK_NON_DESKTOP },
+ 	{ "HVR", 0xaa02, EDID_QUIRK_NON_DESKTOP },
+ 
+-	/* Oculus Rift DK1, DK2, and CV1 VR Headsets */
++	/* Oculus Rift DK1, DK2, CV1 and Rift S VR Headsets */
+ 	{ "OVR", 0x0001, EDID_QUIRK_NON_DESKTOP },
+ 	{ "OVR", 0x0003, EDID_QUIRK_NON_DESKTOP },
+ 	{ "OVR", 0x0004, EDID_QUIRK_NON_DESKTOP },
++	{ "OVR", 0x0012, EDID_QUIRK_NON_DESKTOP },
+ 
+ 	/* Windows Mixed Reality Headsets */
+ 	{ "ACR", 0x7fce, EDID_QUIRK_NON_DESKTOP },
+diff --git a/drivers/hid/hid-multitouch.c b/drivers/hid/hid-multitouch.c
+index 03c720b47306..39e4da7468e1 100644
+--- a/drivers/hid/hid-multitouch.c
++++ b/drivers/hid/hid-multitouch.c
+@@ -69,6 +69,7 @@ MODULE_LICENSE("GPL");
+ #define MT_QUIRK_ASUS_CUSTOM_UP		BIT(17)
+ #define MT_QUIRK_WIN8_PTP_BUTTONS	BIT(18)
+ #define MT_QUIRK_SEPARATE_APP_REPORT	BIT(19)
++#define MT_QUIRK_FORCE_MULTI_INPUT	BIT(20)
+ 
+ #define MT_INPUTMODE_TOUCHSCREEN	0x02
+ #define MT_INPUTMODE_TOUCHPAD		0x03
+@@ -189,6 +190,7 @@ static void mt_post_parse(struct mt_device *td, struct mt_application *app);
+ #define MT_CLS_WIN_8				0x0012
+ #define MT_CLS_EXPORT_ALL_INPUTS		0x0013
+ #define MT_CLS_WIN_8_DUAL			0x0014
++#define MT_CLS_WIN_8_FORCE_MULTI_INPUT		0x0015
+ 
+ /* vendor specific classes */
+ #define MT_CLS_3M				0x0101
+@@ -279,6 +281,15 @@ static const struct mt_class mt_classes[] = {
+ 			MT_QUIRK_CONTACT_CNT_ACCURATE |
+ 			MT_QUIRK_WIN8_PTP_BUTTONS,
+ 		.export_all_inputs = true },
++	{ .name = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		.quirks = MT_QUIRK_ALWAYS_VALID |
++			MT_QUIRK_IGNORE_DUPLICATES |
++			MT_QUIRK_HOVERING |
++			MT_QUIRK_CONTACT_CNT_ACCURATE |
++			MT_QUIRK_STICKY_FINGERS |
++			MT_QUIRK_WIN8_PTP_BUTTONS |
++			MT_QUIRK_FORCE_MULTI_INPUT,
++		.export_all_inputs = true },
+ 
+ 	/*
+ 	 * vendor specific classes
+@@ -1714,6 +1725,11 @@ static int mt_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ 	if (id->group != HID_GROUP_MULTITOUCH_WIN_8)
+ 		hdev->quirks |= HID_QUIRK_MULTI_INPUT;
+ 
++	if (mtclass->quirks & MT_QUIRK_FORCE_MULTI_INPUT) {
++		hdev->quirks &= ~HID_QUIRK_INPUT_PER_APP;
++		hdev->quirks |= HID_QUIRK_MULTI_INPUT;
++	}
++
+ 	timer_setup(&td->release_timer, mt_expired_timeout, 0);
+ 
+ 	ret = hid_parse(hdev);
+@@ -1926,6 +1942,11 @@ static const struct hid_device_id mt_devices[] = {
+ 		MT_USB_DEVICE(USB_VENDOR_ID_DWAV,
+ 			USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_C002) },
+ 
++	/* Elan devices */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_ELAN, 0x313a) },
++
+ 	/* Elitegroup panel */
+ 	{ .driver_data = MT_CLS_SERIAL,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_ELITEGROUP,
+@@ -2056,6 +2077,11 @@ static const struct hid_device_id mt_devices[] = {
+ 		MT_USB_DEVICE(USB_VENDOR_ID_STANTUM_STM,
+ 			USB_DEVICE_ID_MTP_STM)},
+ 
++	/* Synaptics devices */
++	{ .driver_data = MT_CLS_WIN_8_FORCE_MULTI_INPUT,
++		HID_DEVICE(BUS_I2C, HID_GROUP_MULTITOUCH_WIN_8,
++			USB_VENDOR_ID_SYNAPTICS, 0xce08) },
++
+ 	/* TopSeed panels */
+ 	{ .driver_data = MT_CLS_TOPSEED,
+ 		MT_USB_DEVICE(USB_VENDOR_ID_TOPSEED2,
+diff --git a/drivers/hid/hid-sony.c b/drivers/hid/hid-sony.c
+index 4c6ed6ef31f1..2f073f536070 100644
+--- a/drivers/hid/hid-sony.c
++++ b/drivers/hid/hid-sony.c
+@@ -867,6 +867,23 @@ static u8 *sony_report_fixup(struct hid_device *hdev, u8 *rdesc,
+ 	if (sc->quirks & PS3REMOTE)
+ 		return ps3remote_fixup(hdev, rdesc, rsize);
+ 
++	/*
++	 * Some knock-off USB dongles incorrectly report their button count
++	 * as 13 instead of 16 causing three non-functional buttons.
++	 */
++	if ((sc->quirks & SIXAXIS_CONTROLLER_USB) && *rsize >= 45 &&
++		/* Report Count (13) */
++		rdesc[23] == 0x95 && rdesc[24] == 0x0D &&
++		/* Usage Maximum (13) */
++		rdesc[37] == 0x29 && rdesc[38] == 0x0D &&
++		/* Report Count (3) */
++		rdesc[43] == 0x95 && rdesc[44] == 0x03) {
++		hid_info(hdev, "Fixing up USB dongle report descriptor\n");
++		rdesc[24] = 0x10;
++		rdesc[38] = 0x10;
++		rdesc[44] = 0x00;
++	}
++
+ 	return rdesc;
+ }
+ 
+diff --git a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+index a66f08041a1a..ec142bc8c1da 100644
+--- a/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
++++ b/drivers/hid/i2c-hid/i2c-hid-dmi-quirks.c
+@@ -389,6 +389,14 @@ static const struct dmi_system_id i2c_hid_dmi_desc_override_table[] = {
+ 		},
+ 		.driver_data = (void *)&sipodev_desc
+ 	},
++	{
++		.ident = "Schneider SCL142ALM",
++		.matches = {
++			DMI_EXACT_MATCH(DMI_SYS_VENDOR, "SCHNEIDER"),
++			DMI_EXACT_MATCH(DMI_PRODUCT_NAME, "SCL142ALM"),
++		},
++		.driver_data = (void *)&sipodev_desc
++	},
+ 	{ }	/* Terminate list */
+ };
+ 
+diff --git a/drivers/i2c/busses/i2c-altera.c b/drivers/i2c/busses/i2c-altera.c
+index 92d2c706c2a7..a60042431370 100644
+--- a/drivers/i2c/busses/i2c-altera.c
++++ b/drivers/i2c/busses/i2c-altera.c
+@@ -70,6 +70,7 @@
+  * @isr_mask: cached copy of local ISR enables.
+  * @isr_status: cached copy of local ISR status.
+  * @lock: spinlock for IRQ synchronization.
++ * @isr_mutex: mutex for IRQ thread.
+  */
+ struct altr_i2c_dev {
+ 	void __iomem *base;
+@@ -86,6 +87,7 @@ struct altr_i2c_dev {
+ 	u32 isr_mask;
+ 	u32 isr_status;
+ 	spinlock_t lock;	/* IRQ synchronization */
++	struct mutex isr_mutex;
+ };
+ 
+ static void
+@@ -245,10 +247,11 @@ static irqreturn_t altr_i2c_isr(int irq, void *_dev)
+ 	struct altr_i2c_dev *idev = _dev;
+ 	u32 status = idev->isr_status;
+ 
++	mutex_lock(&idev->isr_mutex);
+ 	if (!idev->msg) {
+ 		dev_warn(idev->dev, "unexpected interrupt\n");
+ 		altr_i2c_int_clear(idev, ALTR_I2C_ALL_IRQ);
+-		return IRQ_HANDLED;
++		goto out;
+ 	}
+ 	read = (idev->msg->flags & I2C_M_RD) != 0;
+ 
+@@ -301,6 +304,8 @@ static irqreturn_t altr_i2c_isr(int irq, void *_dev)
+ 		complete(&idev->msg_complete);
+ 		dev_dbg(idev->dev, "Message Complete\n");
+ 	}
++out:
++	mutex_unlock(&idev->isr_mutex);
+ 
+ 	return IRQ_HANDLED;
+ }
+@@ -312,6 +317,7 @@ static int altr_i2c_xfer_msg(struct altr_i2c_dev *idev, struct i2c_msg *msg)
+ 	u32 value;
+ 	u8 addr = i2c_8bit_addr_from_msg(msg);
+ 
++	mutex_lock(&idev->isr_mutex);
+ 	idev->msg = msg;
+ 	idev->msg_len = msg->len;
+ 	idev->buf = msg->buf;
+@@ -336,6 +342,7 @@ static int altr_i2c_xfer_msg(struct altr_i2c_dev *idev, struct i2c_msg *msg)
+ 		altr_i2c_int_enable(idev, imask, true);
+ 		altr_i2c_fill_tx_fifo(idev);
+ 	}
++	mutex_unlock(&idev->isr_mutex);
+ 
+ 	time_left = wait_for_completion_timeout(&idev->msg_complete,
+ 						ALTR_I2C_XFER_TIMEOUT);
+@@ -409,6 +416,7 @@ static int altr_i2c_probe(struct platform_device *pdev)
+ 	idev->dev = &pdev->dev;
+ 	init_completion(&idev->msg_complete);
+ 	spin_lock_init(&idev->lock);
++	mutex_init(&idev->isr_mutex);
+ 
+ 	ret = device_property_read_u32(idev->dev, "fifo-size",
+ 				       &idev->fifo_size);
+diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
+index 04d59bede5ea..74503cacf594 100644
+--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
++++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
+@@ -947,8 +947,11 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
+ 	u32 id, rev;
+ 
+ 	addr = devm_platform_ioremap_resource(pdev, 0);
++	if (IS_ERR(addr))
++		return PTR_ERR(addr);
++
+ 	irq = platform_get_irq(pdev, 0);
+-	if (IS_ERR(addr) || irq < 0)
++	if (irq < 0)
+ 		return -EINVAL;
+ 
+ 	id = readl(addr + IFI_CANFD_IP_ID);
+diff --git a/drivers/net/can/sun4i_can.c b/drivers/net/can/sun4i_can.c
+index e3ba8ab0cbf4..e2c6cf4b2228 100644
+--- a/drivers/net/can/sun4i_can.c
++++ b/drivers/net/can/sun4i_can.c
+@@ -792,7 +792,7 @@ static int sun4ican_probe(struct platform_device *pdev)
+ 
+ 	addr = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(addr)) {
+-		err = -EBUSY;
++		err = PTR_ERR(addr);
+ 		goto exit;
+ 	}
+ 
+diff --git a/drivers/net/dsa/b53/b53_srab.c b/drivers/net/dsa/b53/b53_srab.c
+index 0a1be5259be0..38cd8285ac67 100644
+--- a/drivers/net/dsa/b53/b53_srab.c
++++ b/drivers/net/dsa/b53/b53_srab.c
+@@ -609,7 +609,7 @@ static int b53_srab_probe(struct platform_device *pdev)
+ 
+ 	priv->regs = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(priv->regs))
+-		return -ENOMEM;
++		return PTR_ERR(priv->regs);
+ 
+ 	dev = b53_switch_alloc(&pdev->dev, &b53_srab_ops, priv);
+ 	if (!dev)
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index b95425a63a13..797dc48536cc 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -818,10 +818,15 @@ mt7530_port_set_vlan_aware(struct dsa_switch *ds, int port)
+ 		   PCR_MATRIX_MASK, PCR_MATRIX(MT7530_ALL_MEMBERS));
+ 
+ 	/* Trapped into security mode allows packet forwarding through VLAN
+-	 * table lookup.
++	 * table lookup. CPU port is set to fallback mode to let untagged
++	 * frames pass through.
+ 	 */
+-	mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
+-		   MT7530_PORT_SECURITY_MODE);
++	if (dsa_is_cpu_port(ds, port))
++		mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
++			   MT7530_PORT_FALLBACK_MODE);
++	else
++		mt7530_rmw(priv, MT7530_PCR_P(port), PCR_PORT_VLAN_MASK,
++			   MT7530_PORT_SECURITY_MODE);
+ 
+ 	/* Set the port as a user port which is to be able to recognize VID
+ 	 * from incoming packets before fetching entry within the VLAN table.
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index 0e7e36d8f994..3ef7b5a6fc22 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -148,6 +148,12 @@ enum mt7530_port_mode {
+ 	/* Port Matrix Mode: Frames are forwarded by the PCR_MATRIX members. */
+ 	MT7530_PORT_MATRIX_MODE = PORT_VLAN(0),
+ 
++	/* Fallback Mode: Forward received frames with ingress ports that do
++	 * not belong to the VLAN member. Frames whose VID is not listed on
++	 * the VLAN table are forwarded by the PCR_MATRIX members.
++	 */
++	MT7530_PORT_FALLBACK_MODE = PORT_VLAN(1),
++
+ 	/* Security Mode: Discard any frame due to ingress membership
+ 	 * violation or VID missed on the VLAN table.
+ 	 */
+diff --git a/drivers/net/ethernet/apple/bmac.c b/drivers/net/ethernet/apple/bmac.c
+index a58185b1d8bf..3e3711b60d01 100644
+--- a/drivers/net/ethernet/apple/bmac.c
++++ b/drivers/net/ethernet/apple/bmac.c
+@@ -1182,7 +1182,7 @@ bmac_get_station_address(struct net_device *dev, unsigned char *ea)
+ 	int i;
+ 	unsigned short data;
+ 
+-	for (i = 0; i < 6; i++)
++	for (i = 0; i < 3; i++)
+ 		{
+ 			reset_and_select_srom(dev);
+ 			data = read_srom(dev, i + EnetAddressOffset/2, SROMAddressBits);
+diff --git a/drivers/net/ethernet/freescale/ucc_geth.c b/drivers/net/ethernet/freescale/ucc_geth.c
+index 0d101c00286f..ab1b4a77b4a3 100644
+--- a/drivers/net/ethernet/freescale/ucc_geth.c
++++ b/drivers/net/ethernet/freescale/ucc_geth.c
+@@ -42,6 +42,7 @@
+ #include <soc/fsl/qe/ucc.h>
+ #include <soc/fsl/qe/ucc_fast.h>
+ #include <asm/machdep.h>
++#include <net/sch_generic.h>
+ 
+ #include "ucc_geth.h"
+ 
+@@ -1548,11 +1549,8 @@ static int ugeth_disable(struct ucc_geth_private *ugeth, enum comm_dir mode)
+ 
+ static void ugeth_quiesce(struct ucc_geth_private *ugeth)
+ {
+-	/* Prevent any further xmits, plus detach the device. */
+-	netif_device_detach(ugeth->ndev);
+-
+-	/* Wait for any current xmits to finish. */
+-	netif_tx_disable(ugeth->ndev);
++	/* Prevent any further xmits */
++	netif_tx_stop_all_queues(ugeth->ndev);
+ 
+ 	/* Disable the interrupt to avoid NAPI rescheduling. */
+ 	disable_irq(ugeth->ug_info->uf_info.irq);
+@@ -1565,7 +1563,10 @@ static void ugeth_activate(struct ucc_geth_private *ugeth)
+ {
+ 	napi_enable(&ugeth->napi);
+ 	enable_irq(ugeth->ug_info->uf_info.irq);
+-	netif_device_attach(ugeth->ndev);
++
++	/* allow to xmit again  */
++	netif_tx_wake_all_queues(ugeth->ndev);
++	__netdev_watchdog_up(ugeth->ndev);
+ }
+ 
+ /* Called every time the controller might need to be made
+diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
+index 7a0d785b826c..17243bb5ba91 100644
+--- a/drivers/net/ethernet/marvell/pxa168_eth.c
++++ b/drivers/net/ethernet/marvell/pxa168_eth.c
+@@ -1418,7 +1418,7 @@ static int pxa168_eth_probe(struct platform_device *pdev)
+ 
+ 	pep->base = devm_platform_ioremap_resource(pdev, 0);
+ 	if (IS_ERR(pep->base)) {
+-		err = -ENOMEM;
++		err = PTR_ERR(pep->base);
+ 		goto err_netdev;
+ 	}
+ 
+diff --git a/drivers/net/ethernet/smsc/smsc911x.c b/drivers/net/ethernet/smsc/smsc911x.c
+index 49a6a9167af4..fc168f85e7af 100644
+--- a/drivers/net/ethernet/smsc/smsc911x.c
++++ b/drivers/net/ethernet/smsc/smsc911x.c
+@@ -2493,20 +2493,20 @@ static int smsc911x_drv_probe(struct platform_device *pdev)
+ 
+ 	retval = smsc911x_init(dev);
+ 	if (retval < 0)
+-		goto out_disable_resources;
++		goto out_init_fail;
+ 
+ 	netif_carrier_off(dev);
+ 
+ 	retval = smsc911x_mii_init(pdev, dev);
+ 	if (retval) {
+ 		SMSC_WARN(pdata, probe, "Error %i initialising mii", retval);
+-		goto out_disable_resources;
++		goto out_init_fail;
+ 	}
+ 
+ 	retval = register_netdev(dev);
+ 	if (retval) {
+ 		SMSC_WARN(pdata, probe, "Error %i registering device", retval);
+-		goto out_disable_resources;
++		goto out_init_fail;
+ 	} else {
+ 		SMSC_TRACE(pdata, probe,
+ 			   "Network interface: \"%s\"", dev->name);
+@@ -2547,9 +2547,10 @@ static int smsc911x_drv_probe(struct platform_device *pdev)
+ 
+ 	return 0;
+ 
+-out_disable_resources:
++out_init_fail:
+ 	pm_runtime_put(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
++out_disable_resources:
+ 	(void)smsc911x_disable_resources(pdev);
+ out_enable_resources_fail:
+ 	smsc911x_free_resources(pdev);
+diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+index 6ae13dc19510..02102c781a8c 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
++++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-ipq806x.c
+@@ -319,6 +319,19 @@ static int ipq806x_gmac_probe(struct platform_device *pdev)
+ 	/* Enable PTP clock */
+ 	regmap_read(gmac->nss_common, NSS_COMMON_CLK_GATE, &val);
+ 	val |= NSS_COMMON_CLK_GATE_PTP_EN(gmac->id);
++	switch (gmac->phy_mode) {
++	case PHY_INTERFACE_MODE_RGMII:
++		val |= NSS_COMMON_CLK_GATE_RGMII_RX_EN(gmac->id) |
++			NSS_COMMON_CLK_GATE_RGMII_TX_EN(gmac->id);
++		break;
++	case PHY_INTERFACE_MODE_SGMII:
++		val |= NSS_COMMON_CLK_GATE_GMII_RX_EN(gmac->id) |
++				NSS_COMMON_CLK_GATE_GMII_TX_EN(gmac->id);
++		break;
++	default:
++		/* We don't get here; the switch above will have errored out */
++		unreachable();
++	}
+ 	regmap_write(gmac->nss_common, NSS_COMMON_CLK_GATE, val);
+ 
+ 	if (gmac->phy_mode == PHY_INTERFACE_MODE_SGMII) {
+diff --git a/drivers/net/ethernet/ti/cpsw_ale.c b/drivers/net/ethernet/ti/cpsw_ale.c
+index ecdbde539eb7..4eb14b174c1a 100644
+--- a/drivers/net/ethernet/ti/cpsw_ale.c
++++ b/drivers/net/ethernet/ti/cpsw_ale.c
+@@ -917,7 +917,7 @@ struct cpsw_ale *cpsw_ale_create(struct cpsw_ale_params *params)
+ 
+ 	ale = devm_kzalloc(params->dev, sizeof(*ale), GFP_KERNEL);
+ 	if (!ale)
+-		return NULL;
++		return ERR_PTR(-ENOMEM);
+ 
+ 	ale->p0_untag_vid_mask =
+ 		devm_kmalloc_array(params->dev, BITS_TO_LONGS(VLAN_N_VID),
+diff --git a/drivers/net/ethernet/ti/cpsw_priv.c b/drivers/net/ethernet/ti/cpsw_priv.c
+index 97a058ca60ac..d0b6c418a870 100644
+--- a/drivers/net/ethernet/ti/cpsw_priv.c
++++ b/drivers/net/ethernet/ti/cpsw_priv.c
+@@ -490,9 +490,9 @@ int cpsw_init_common(struct cpsw_common *cpsw, void __iomem *ss_regs,
+ 	ale_params.ale_ports		= CPSW_ALE_PORTS_NUM;
+ 
+ 	cpsw->ale = cpsw_ale_create(&ale_params);
+-	if (!cpsw->ale) {
++	if (IS_ERR(cpsw->ale)) {
+ 		dev_err(dev, "error initializing ale engine\n");
+-		return -ENODEV;
++		return PTR_ERR(cpsw->ale);
+ 	}
+ 
+ 	dma_params.dev		= dev;
+diff --git a/drivers/net/ethernet/ti/netcp_ethss.c b/drivers/net/ethernet/ti/netcp_ethss.c
+index fb36115e9c51..fdbae734acce 100644
+--- a/drivers/net/ethernet/ti/netcp_ethss.c
++++ b/drivers/net/ethernet/ti/netcp_ethss.c
+@@ -3704,9 +3704,9 @@ static int gbe_probe(struct netcp_device *netcp_device, struct device *dev,
+ 		ale_params.nu_switch_ale = true;
+ 	}
+ 	gbe_dev->ale = cpsw_ale_create(&ale_params);
+-	if (!gbe_dev->ale) {
++	if (IS_ERR(gbe_dev->ale)) {
+ 		dev_err(gbe_dev->dev, "error initializing ale engine\n");
+-		ret = -ENODEV;
++		ret = PTR_ERR(gbe_dev->ale);
+ 		goto free_sec_ports;
+ 	} else {
+ 		dev_dbg(gbe_dev->dev, "Created a gbe ale engine\n");
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 28e3c5c0e3c3..faca0d84f5af 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -1239,7 +1239,7 @@ int phy_sfp_probe(struct phy_device *phydev,
+ 		  const struct sfp_upstream_ops *ops)
+ {
+ 	struct sfp_bus *bus;
+-	int ret;
++	int ret = 0;
+ 
+ 	if (phydev->mdio.dev.fwnode) {
+ 		bus = sfp_bus_find_fwnode(phydev->mdio.dev.fwnode);
+@@ -1251,7 +1251,7 @@ int phy_sfp_probe(struct phy_device *phydev,
+ 		ret = sfp_bus_add_upstream(bus, phydev, ops);
+ 		sfp_bus_put(bus);
+ 	}
+-	return 0;
++	return ret;
+ }
+ EXPORT_SYMBOL(phy_sfp_probe);
+ 
+diff --git a/drivers/net/wireless/cisco/airo.c b/drivers/net/wireless/cisco/airo.c
+index 8363f91df7ea..827bb6d74815 100644
+--- a/drivers/net/wireless/cisco/airo.c
++++ b/drivers/net/wireless/cisco/airo.c
+@@ -1925,6 +1925,10 @@ static netdev_tx_t mpi_start_xmit(struct sk_buff *skb,
+ 		airo_print_err(dev->name, "%s: skb == NULL!",__func__);
+ 		return NETDEV_TX_OK;
+ 	}
++	if (skb_padto(skb, ETH_ZLEN)) {
++		dev->stats.tx_dropped++;
++		return NETDEV_TX_OK;
++	}
+ 	npacks = skb_queue_len (&ai->txq);
+ 
+ 	if (npacks >= MAXTXQ - 1) {
+@@ -2127,6 +2131,10 @@ static netdev_tx_t airo_start_xmit(struct sk_buff *skb,
+ 		airo_print_err(dev->name, "%s: skb == NULL!", __func__);
+ 		return NETDEV_TX_OK;
+ 	}
++	if (skb_padto(skb, ETH_ZLEN)) {
++		dev->stats.tx_dropped++;
++		return NETDEV_TX_OK;
++	}
+ 
+ 	/* Find a vacant FID */
+ 	for( i = 0; i < MAX_FIDS / 2 && (fids[i] & 0xffff0000); i++ );
+@@ -2201,6 +2209,10 @@ static netdev_tx_t airo_start_xmit11(struct sk_buff *skb,
+ 		airo_print_err(dev->name, "%s: skb == NULL!", __func__);
+ 		return NETDEV_TX_OK;
+ 	}
++	if (skb_padto(skb, ETH_ZLEN)) {
++		dev->stats.tx_dropped++;
++		return NETDEV_TX_OK;
++	}
+ 
+ 	/* Find a vacant FID */
+ 	for( i = MAX_FIDS / 2; i < MAX_FIDS && (fids[i] & 0xffff0000); i++ );
+diff --git a/drivers/net/wireless/intersil/p54/p54usb.c b/drivers/net/wireless/intersil/p54/p54usb.c
+index b94764c88750..ff0e30c0c14c 100644
+--- a/drivers/net/wireless/intersil/p54/p54usb.c
++++ b/drivers/net/wireless/intersil/p54/p54usb.c
+@@ -61,6 +61,7 @@ static const struct usb_device_id p54u_table[] = {
+ 	{USB_DEVICE(0x0db0, 0x6826)},	/* MSI UB54G (MS-6826) */
+ 	{USB_DEVICE(0x107b, 0x55f2)},	/* Gateway WGU-210 (Gemtek) */
+ 	{USB_DEVICE(0x124a, 0x4023)},	/* Shuttle PN15, Airvast WM168g, IOGear GWU513 */
++	{USB_DEVICE(0x124a, 0x4026)},	/* AirVasT USB wireless device */
+ 	{USB_DEVICE(0x1435, 0x0210)},	/* Inventel UR054G */
+ 	{USB_DEVICE(0x15a9, 0x0002)},	/* Gemtek WUBI-100GW 802.11g */
+ 	{USB_DEVICE(0x1630, 0x0005)},	/* 2Wire 802.11g USB (v1) / Z-Com */
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x02.h b/drivers/net/wireless/mediatek/mt76/mt76x02.h
+index 0ca0bbfe8769..c7c601f0348a 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x02.h
++++ b/drivers/net/wireless/mediatek/mt76/mt76x02.h
+@@ -211,6 +211,7 @@ static inline bool is_mt76x0(struct mt76x02_dev *dev)
+ static inline bool is_mt76x2(struct mt76x02_dev *dev)
+ {
+ 	return mt76_chip(&dev->mt76) == 0x7612 ||
++	       mt76_chip(&dev->mt76) == 0x7632 ||
+ 	       mt76_chip(&dev->mt76) == 0x7662 ||
+ 	       mt76_chip(&dev->mt76) == 0x7602;
+ }
+diff --git a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+index b64ad816cc25..a6a14621e8a9 100644
+--- a/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
++++ b/drivers/net/wireless/mediatek/mt76/mt76x2/usb.c
+@@ -18,6 +18,7 @@ static const struct usb_device_id mt76x2u_device_table[] = {
+ 	{ USB_DEVICE(0x7392, 0xb711) },	/* Edimax EW 7722 UAC */
+ 	{ USB_DEVICE(0x0846, 0x9053) },	/* Netgear A6210 */
+ 	{ USB_DEVICE(0x045e, 0x02e6) },	/* XBox One Wireless Adapter */
++	{ USB_DEVICE(0x045e, 0x02fe) },	/* XBox One Wireless Adapter */
+ 	{ },
+ };
+ 
+diff --git a/drivers/scsi/scsi_pm.c b/drivers/scsi/scsi_pm.c
+index 3717eea37ecb..5f0ad8b32e3a 100644
+--- a/drivers/scsi/scsi_pm.c
++++ b/drivers/scsi/scsi_pm.c
+@@ -80,6 +80,10 @@ static int scsi_dev_type_resume(struct device *dev,
+ 	dev_dbg(dev, "scsi resume: %d\n", err);
+ 
+ 	if (err == 0) {
++		bool was_runtime_suspended;
++
++		was_runtime_suspended = pm_runtime_suspended(dev);
++
+ 		pm_runtime_disable(dev);
+ 		err = pm_runtime_set_active(dev);
+ 		pm_runtime_enable(dev);
+@@ -93,8 +97,10 @@ static int scsi_dev_type_resume(struct device *dev,
+ 		 */
+ 		if (!err && scsi_is_sdev_device(dev)) {
+ 			struct scsi_device *sdev = to_scsi_device(dev);
+-
+-			blk_set_runtime_active(sdev->request_queue);
++			if (was_runtime_suspended)
++				blk_post_runtime_resume(sdev->request_queue, 0);
++			else
++				blk_set_runtime_active(sdev->request_queue);
+ 		}
+ 	}
+ 
+diff --git a/drivers/staging/media/ipu3/include/intel-ipu3.h b/drivers/staging/media/ipu3/include/intel-ipu3.h
+index 1c9c3ba4d518..a607b0158c81 100644
+--- a/drivers/staging/media/ipu3/include/intel-ipu3.h
++++ b/drivers/staging/media/ipu3/include/intel-ipu3.h
+@@ -450,7 +450,7 @@ struct ipu3_uapi_awb_fr_config_s {
+ 	__u32 bayer_sign;
+ 	__u8 bayer_nf;
+ 	__u8 reserved2[7];
+-} __attribute__((aligned(32))) __packed;
++} __packed;
+ 
+ /**
+  * struct ipu3_uapi_4a_config - 4A config
+@@ -466,7 +466,8 @@ struct ipu3_uapi_4a_config {
+ 	struct ipu3_uapi_ae_grid_config ae_grd_config;
+ 	__u8 padding[20];
+ 	struct ipu3_uapi_af_config_s af_config;
+-	struct ipu3_uapi_awb_fr_config_s awb_fr_config;
++	struct ipu3_uapi_awb_fr_config_s awb_fr_config
++		__attribute__((aligned(32)));
+ } __packed;
+ 
+ /**
+@@ -2477,7 +2478,7 @@ struct ipu3_uapi_acc_param {
+ 	struct ipu3_uapi_yuvp1_yds_config yds2 __attribute__((aligned(32)));
+ 	struct ipu3_uapi_yuvp2_tcc_static_config tcc __attribute__((aligned(32)));
+ 	struct ipu3_uapi_anr_config anr;
+-	struct ipu3_uapi_awb_fr_config_s awb_fr __attribute__((aligned(32)));
++	struct ipu3_uapi_awb_fr_config_s awb_fr;
+ 	struct ipu3_uapi_ae_config ae;
+ 	struct ipu3_uapi_af_config_s af;
+ 	struct ipu3_uapi_awb_config awb;
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 832e042531bc..c6e1f76a6ee0 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -822,6 +822,7 @@ static struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
+ 		goto err;
+ 
+ 	ctx->flags = p->flags;
++	init_waitqueue_head(&ctx->sqo_wait);
+ 	init_waitqueue_head(&ctx->cq_wait);
+ 	INIT_LIST_HEAD(&ctx->cq_overflow_list);
+ 	init_completion(&ctx->completions[0]);
+@@ -4261,12 +4262,13 @@ static int io_req_defer(struct io_kiocb *req, const struct io_uring_sqe *sqe)
+ 	if (!req_need_defer(req) && list_empty_careful(&ctx->defer_list))
+ 		return 0;
+ 
+-	if (!req->io && io_alloc_async_ctx(req))
+-		return -EAGAIN;
+-
+-	ret = io_req_defer_prep(req, sqe);
+-	if (ret < 0)
+-		return ret;
++	if (!req->io) {
++		if (io_alloc_async_ctx(req))
++			return -EAGAIN;
++		ret = io_req_defer_prep(req, sqe);
++		if (ret < 0)
++			return ret;
++	}
+ 
+ 	spin_lock_irq(&ctx->completion_lock);
+ 	if (!req_need_defer(req) && list_empty(&ctx->defer_list)) {
+@@ -4821,9 +4823,15 @@ fail_req:
+ 			io_double_put_req(req);
+ 		}
+ 	} else if (req->flags & REQ_F_FORCE_ASYNC) {
+-		ret = io_req_defer_prep(req, sqe);
+-		if (unlikely(ret < 0))
+-			goto fail_req;
++		if (!req->io) {
++			ret = -EAGAIN;
++			if (io_alloc_async_ctx(req))
++				goto fail_req;
++			ret = io_req_defer_prep(req, sqe);
++			if (unlikely(ret < 0))
++				goto fail_req;
++		}
++
+ 		/*
+ 		 * Never try inline submit of IOSQE_ASYNC is set, go straight
+ 		 * to async execution.
+@@ -5216,6 +5224,7 @@ static int io_sq_thread(void *data)
+ 				finish_wait(&ctx->sqo_wait, &wait);
+ 
+ 				ctx->rings->sq_flags &= ~IORING_SQ_NEED_WAKEUP;
++				ret = 0;
+ 				continue;
+ 			}
+ 			finish_wait(&ctx->sqo_wait, &wait);
+@@ -6004,7 +6013,6 @@ static int io_sq_offload_start(struct io_ring_ctx *ctx,
+ {
+ 	int ret;
+ 
+-	init_waitqueue_head(&ctx->sqo_wait);
+ 	mmgrab(current->mm);
+ 	ctx->sqo_mm = current->mm;
+ 
+diff --git a/include/uapi/linux/mmc/ioctl.h b/include/uapi/linux/mmc/ioctl.h
+index 00c08120f3ba..27a39847d55c 100644
+--- a/include/uapi/linux/mmc/ioctl.h
++++ b/include/uapi/linux/mmc/ioctl.h
+@@ -3,6 +3,7 @@
+ #define LINUX_MMC_IOCTL_H
+ 
+ #include <linux/types.h>
++#include <linux/major.h>
+ 
+ struct mmc_ioc_cmd {
+ 	/*
+diff --git a/kernel/cgroup/rstat.c b/kernel/cgroup/rstat.c
+index 6f87352f8219..41ca996568df 100644
+--- a/kernel/cgroup/rstat.c
++++ b/kernel/cgroup/rstat.c
+@@ -33,12 +33,9 @@ void cgroup_rstat_updated(struct cgroup *cgrp, int cpu)
+ 		return;
+ 
+ 	/*
+-	 * Paired with the one in cgroup_rstat_cpu_pop_updated().  Either we
+-	 * see NULL updated_next or they see our updated stat.
+-	 */
+-	smp_mb();
+-
+-	/*
++	 * Speculative already-on-list test. This may race leading to
++	 * temporary inaccuracies, which is fine.
++	 *
+ 	 * Because @parent's updated_children is terminated with @parent
+ 	 * instead of NULL, we can tell whether @cgrp is on the list by
+ 	 * testing the next pointer for NULL.
+@@ -134,13 +131,6 @@ static struct cgroup *cgroup_rstat_cpu_pop_updated(struct cgroup *pos,
+ 		*nextp = rstatc->updated_next;
+ 		rstatc->updated_next = NULL;
+ 
+-		/*
+-		 * Paired with the one in cgroup_rstat_cpu_updated().
+-		 * Either they see NULL updated_next or we see their
+-		 * updated stat.
+-		 */
+-		smp_mb();
+-
+ 		return pos;
+ 	}
+ 
+diff --git a/kernel/relay.c b/kernel/relay.c
+index ade14fb7ce2e..4b760ec16342 100644
+--- a/kernel/relay.c
++++ b/kernel/relay.c
+@@ -581,6 +581,11 @@ struct rchan *relay_open(const char *base_filename,
+ 		return NULL;
+ 
+ 	chan->buf = alloc_percpu(struct rchan_buf *);
++	if (!chan->buf) {
++		kfree(chan);
++		return NULL;
++	}
++
+ 	chan->version = RELAYFS_CHANNEL_VERSION;
+ 	chan->n_subbufs = n_subbufs;
+ 	chan->subbuf_size = subbuf_size;
+diff --git a/mm/mremap.c b/mm/mremap.c
+index d28f08a36b96..3a097e02cafe 100644
+--- a/mm/mremap.c
++++ b/mm/mremap.c
+@@ -266,7 +266,7 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
+ 		new_pmd = alloc_new_pmd(vma->vm_mm, vma, new_addr);
+ 		if (!new_pmd)
+ 			break;
+-		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd)) {
++		if (is_swap_pmd(*old_pmd) || pmd_trans_huge(*old_pmd) || pmd_devmap(*old_pmd)) {
+ 			if (extent == HPAGE_PMD_SIZE) {
+ 				bool moved;
+ 				/* See comment in move_ptes() */
+diff --git a/security/integrity/evm/evm_crypto.c b/security/integrity/evm/evm_crypto.c
+index cc826c2767a3..fbc2ee6d46fc 100644
+--- a/security/integrity/evm/evm_crypto.c
++++ b/security/integrity/evm/evm_crypto.c
+@@ -209,7 +209,7 @@ static int evm_calc_hmac_or_hash(struct dentry *dentry,
+ 	data->hdr.length = crypto_shash_digestsize(desc->tfm);
+ 
+ 	error = -ENODATA;
+-	list_for_each_entry_rcu(xattr, &evm_config_xattrnames, list) {
++	list_for_each_entry_lockless(xattr, &evm_config_xattrnames, list) {
+ 		bool is_ima = false;
+ 
+ 		if (strcmp(xattr->name, XATTR_NAME_IMA) == 0)
+diff --git a/security/integrity/evm/evm_main.c b/security/integrity/evm/evm_main.c
+index f9a81b187fae..a2c393385db0 100644
+--- a/security/integrity/evm/evm_main.c
++++ b/security/integrity/evm/evm_main.c
+@@ -99,7 +99,7 @@ static int evm_find_protected_xattrs(struct dentry *dentry)
+ 	if (!(inode->i_opflags & IOP_XATTR))
+ 		return -EOPNOTSUPP;
+ 
+-	list_for_each_entry_rcu(xattr, &evm_config_xattrnames, list) {
++	list_for_each_entry_lockless(xattr, &evm_config_xattrnames, list) {
+ 		error = __vfs_getxattr(dentry, inode, xattr->name, NULL, 0);
+ 		if (error < 0) {
+ 			if (error == -ENODATA)
+@@ -230,7 +230,7 @@ static int evm_protected_xattr(const char *req_xattr_name)
+ 	struct xattr_list *xattr;
+ 
+ 	namelen = strlen(req_xattr_name);
+-	list_for_each_entry_rcu(xattr, &evm_config_xattrnames, list) {
++	list_for_each_entry_lockless(xattr, &evm_config_xattrnames, list) {
+ 		if ((strlen(xattr->name) == namelen)
+ 		    && (strncmp(req_xattr_name, xattr->name, namelen) == 0)) {
+ 			found = 1;
+diff --git a/security/integrity/evm/evm_secfs.c b/security/integrity/evm/evm_secfs.c
+index c11c1f7b3ddd..0f37ef27268d 100644
+--- a/security/integrity/evm/evm_secfs.c
++++ b/security/integrity/evm/evm_secfs.c
+@@ -234,7 +234,14 @@ static ssize_t evm_write_xattrs(struct file *file, const char __user *buf,
+ 		goto out;
+ 	}
+ 
+-	/* Guard against races in evm_read_xattrs */
++	/*
++	 * xattr_list_mutex guards against races in evm_read_xattrs().
++	 * Entries are only added to the evm_config_xattrnames list
++	 * and never deleted. Therefore, the list is traversed
++	 * using list_for_each_entry_lockless() without holding
++	 * the mutex in evm_calc_hmac_or_hash(), evm_find_protected_xattrs()
++	 * and evm_protected_xattr().
++	 */
+ 	mutex_lock(&xattr_list_mutex);
+ 	list_for_each_entry(tmp, &evm_config_xattrnames, list) {
+ 		if (strcmp(xattr->name, tmp->name) == 0) {
+diff --git a/tools/arch/x86/include/uapi/asm/unistd.h b/tools/arch/x86/include/uapi/asm/unistd.h
+index 196fdd02b8b1..30d7d04d72d6 100644
+--- a/tools/arch/x86/include/uapi/asm/unistd.h
++++ b/tools/arch/x86/include/uapi/asm/unistd.h
+@@ -3,7 +3,7 @@
+ #define _UAPI_ASM_X86_UNISTD_H
+ 
+ /* x32 syscall flag bit */
+-#define __X32_SYSCALL_BIT	0x40000000UL
++#define __X32_SYSCALL_BIT	0x40000000
+ 
+ #ifndef __KERNEL__
+ # ifdef __i386__
+diff --git a/tools/testing/selftests/drivers/net/mlxsw/qos_mc_aware.sh b/tools/testing/selftests/drivers/net/mlxsw/qos_mc_aware.sh
+index 24dd8ed48580..b025daea062d 100755
+--- a/tools/testing/selftests/drivers/net/mlxsw/qos_mc_aware.sh
++++ b/tools/testing/selftests/drivers/net/mlxsw/qos_mc_aware.sh
+@@ -300,7 +300,7 @@ test_uc_aware()
+ 	local i
+ 
+ 	for ((i = 0; i < attempts; ++i)); do
+-		if $ARPING -c 1 -I $h1 -b 192.0.2.66 -q -w 0.1; then
++		if $ARPING -c 1 -I $h1 -b 192.0.2.66 -q -w 1; then
+ 			((passes++))
+ 		fi
+ 
+diff --git a/tools/testing/selftests/wireguard/qemu/Makefile b/tools/testing/selftests/wireguard/qemu/Makefile
+index 90598a425c18..4bdd6c1a19d3 100644
+--- a/tools/testing/selftests/wireguard/qemu/Makefile
++++ b/tools/testing/selftests/wireguard/qemu/Makefile
+@@ -44,7 +44,7 @@ endef
+ $(eval $(call tar_download,MUSL,musl,1.2.0,.tar.gz,https://musl.libc.org/releases/,c6de7b191139142d3f9a7b5b702c9cae1b5ee6e7f57e582da9328629408fd4e8))
+ $(eval $(call tar_download,IPERF,iperf,3.7,.tar.gz,https://downloads.es.net/pub/iperf/,d846040224317caf2f75c843d309a950a7db23f9b44b94688ccbe557d6d1710c))
+ $(eval $(call tar_download,BASH,bash,5.0,.tar.gz,https://ftp.gnu.org/gnu/bash/,b4a80f2ac66170b2913efbfb9f2594f1f76c7b1afd11f799e22035d63077fb4d))
+-$(eval $(call tar_download,IPROUTE2,iproute2,5.4.0,.tar.xz,https://www.kernel.org/pub/linux/utils/net/iproute2/,fe97aa60a0d4c5ac830be18937e18dc3400ca713a33a89ad896ff1e3d46086ae))
++$(eval $(call tar_download,IPROUTE2,iproute2,5.6.0,.tar.xz,https://www.kernel.org/pub/linux/utils/net/iproute2/,1b5b0e25ce6e23da7526ea1da044e814ad85ba761b10dd29c2b027c056b04692))
+ $(eval $(call tar_download,IPTABLES,iptables,1.8.4,.tar.bz2,https://www.netfilter.org/projects/iptables/files/,993a3a5490a544c2cbf2ef15cf7e7ed21af1845baf228318d5c36ef8827e157c))
+ $(eval $(call tar_download,NMAP,nmap,7.80,.tar.bz2,https://nmap.org/dist/,fcfa5a0e42099e12e4bf7a68ebe6fde05553383a682e816a7ec9256ab4773faa))
+ $(eval $(call tar_download,IPUTILS,iputils,s20190709,.tar.gz,https://github.com/iputils/iputils/archive/s20190709.tar.gz/#,a15720dd741d7538dd2645f9f516d193636ae4300ff7dbc8bfca757bf166490a))


^ permalink raw reply related	[flat|nested] 30+ messages in thread

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-06-10 19:41 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-06-10 19:41 UTC (permalink / raw
  To: gentoo-commits

commit:     e6b558ca6926f73ffd8c036e6f4096dae8edfb21
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 10 19:40:53 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 10 19:40:53 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=e6b558ca

Linux patch 5.6.18

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1017_linux-5.6.18.patch | 1809 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1813 insertions(+)

diff --git a/0000_README b/0000_README
index 07595c4..fd785d4 100644
--- a/0000_README
+++ b/0000_README
@@ -111,6 +111,10 @@ Patch:  1016_linux-5.6.17.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.17
 
+Patch:  1017_linux-5.6.18.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.18
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1017_linux-5.6.18.patch b/1017_linux-5.6.18.patch
new file mode 100644
index 0000000..9169925
--- /dev/null
+++ b/1017_linux-5.6.18.patch
@@ -0,0 +1,1809 @@
+diff --git a/Documentation/ABI/testing/sysfs-devices-system-cpu b/Documentation/ABI/testing/sysfs-devices-system-cpu
+index 2e0e3b45d02a..b39531a3c5bc 100644
+--- a/Documentation/ABI/testing/sysfs-devices-system-cpu
++++ b/Documentation/ABI/testing/sysfs-devices-system-cpu
+@@ -492,6 +492,7 @@ What:		/sys/devices/system/cpu/vulnerabilities
+ 		/sys/devices/system/cpu/vulnerabilities/spec_store_bypass
+ 		/sys/devices/system/cpu/vulnerabilities/l1tf
+ 		/sys/devices/system/cpu/vulnerabilities/mds
++		/sys/devices/system/cpu/vulnerabilities/srbds
+ 		/sys/devices/system/cpu/vulnerabilities/tsx_async_abort
+ 		/sys/devices/system/cpu/vulnerabilities/itlb_multihit
+ Date:		January 2018
+diff --git a/Documentation/admin-guide/hw-vuln/index.rst b/Documentation/admin-guide/hw-vuln/index.rst
+index 0795e3c2643f..ca4dbdd9016d 100644
+--- a/Documentation/admin-guide/hw-vuln/index.rst
++++ b/Documentation/admin-guide/hw-vuln/index.rst
+@@ -14,3 +14,4 @@ are configurable at compile, boot or run time.
+    mds
+    tsx_async_abort
+    multihit.rst
++   special-register-buffer-data-sampling.rst
+diff --git a/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+new file mode 100644
+index 000000000000..47b1b3afac99
+--- /dev/null
++++ b/Documentation/admin-guide/hw-vuln/special-register-buffer-data-sampling.rst
+@@ -0,0 +1,149 @@
++.. SPDX-License-Identifier: GPL-2.0
++
++SRBDS - Special Register Buffer Data Sampling
++=============================================
++
++SRBDS is a hardware vulnerability that allows MDS :doc:`mds` techniques to
++infer values returned from special register accesses.  Special register
++accesses are accesses to off-core registers.  According to Intel's evaluation,
++the special register reads that have a security expectation of privacy are
++RDRAND, RDSEED and SGX EGETKEY.
++
++When RDRAND, RDSEED and EGETKEY instructions are used, the data is moved
++to the core through the special register mechanism that is susceptible
++to MDS attacks.
++
++Affected processors
++--------------------
++Core models (desktop, mobile, Xeon-E3) that implement RDRAND and/or RDSEED may
++be affected.
++
++A processor is affected by SRBDS if its Family_Model and stepping are
++in the following list, with the exception of the listed processors
++exporting MDS_NO while Intel TSX is available yet not enabled. The
++latter class of processors is only affected when Intel TSX is enabled
++by software using TSX_CTRL_MSR; otherwise they are not affected.
++
++  =============  ============  ========
++  common name    Family_Model  Stepping
++  =============  ============  ========
++  IvyBridge      06_3AH        All
++
++  Haswell        06_3CH        All
++  Haswell_L      06_45H        All
++  Haswell_G      06_46H        All
++
++  Broadwell_G    06_47H        All
++  Broadwell      06_3DH        All
++
++  Skylake_L      06_4EH        All
++  Skylake        06_5EH        All
++
++  Kabylake_L     06_8EH        <= 0xC
++  Kabylake       06_9EH        <= 0xD
++  =============  ============  ========
++
++Related CVEs
++------------
++
++The following CVE entry is related to this SRBDS issue:
++
++    ==============  =====  =====================================
++    CVE-2020-0543   SRBDS  Special Register Buffer Data Sampling
++    ==============  =====  =====================================
++
++Attack scenarios
++----------------
++An unprivileged user can extract values returned from RDRAND and RDSEED
++executed on another core or sibling thread using MDS techniques.
++
++
++Mitigation mechanism
++--------------------
++Intel will release microcode updates that modify the RDRAND, RDSEED, and
++EGETKEY instructions to overwrite secret special register data in the shared
++staging buffer before the secret data can be accessed by another logical
++processor.
++
++During execution of the RDRAND, RDSEED, or EGETKEY instructions, off-core
++accesses from other logical processors will be delayed until the special
++register read is complete and the secret data in the shared staging buffer is
++overwritten.
++
++This has three effects on performance:
++
++#. RDRAND, RDSEED, or EGETKEY instructions have higher latency.
++
++#. Executing RDRAND at the same time on multiple logical processors will be
++   serialized, resulting in an overall reduction in the maximum RDRAND
++   bandwidth.
++
++#. Executing RDRAND, RDSEED or EGETKEY will delay memory accesses from other
++   logical processors that miss their core caches, with an impact similar to
++   legacy locked cache-line-split accesses.
++
++The microcode updates provide an opt-out mechanism (RNGDS_MITG_DIS) to disable
++the mitigation for RDRAND and RDSEED instructions executed outside of Intel
++Software Guard Extensions (Intel SGX) enclaves. On logical processors that
++disable the mitigation using this opt-out mechanism, RDRAND and RDSEED do not
++take longer to execute and do not impact performance of sibling logical
++processors memory accesses. The opt-out mechanism does not affect Intel SGX
++enclaves (including execution of RDRAND or RDSEED inside an enclave, as well
++as EGETKEY execution).
++
++IA32_MCU_OPT_CTRL MSR Definition
++--------------------------------
++Along with the mitigation for this issue, Intel added a new thread-scope
++IA32_MCU_OPT_CTRL MSR (address 0x123). The presence of this MSR and
++RNGDS_MITG_DIS (bit 0) is enumerated by CPUID.(EAX=07H,ECX=0).EDX[SRBDS_CTRL =
++9]==1. This MSR is introduced through the microcode update.
++
++Setting IA32_MCU_OPT_CTRL[0] (RNGDS_MITG_DIS) to 1 for a logical processor
++disables the mitigation for RDRAND and RDSEED executed outside of an Intel SGX
++enclave on that logical processor. Opting out of the mitigation for a
++particular logical processor does not affect the RDRAND and RDSEED mitigations
++for other logical processors.
++
++Note that inside of an Intel SGX enclave, the mitigation is applied regardless
++of the value of RNGDS_MITG_DIS.
++
++Mitigation control on the kernel command line
++---------------------------------------------
++The kernel command line allows control over the SRBDS mitigation at boot time
++with the option "srbds=".  The option for this is:
++
++  ============= =============================================================
++  off           This option disables SRBDS mitigation for RDRAND and RDSEED on
++                affected platforms.
++  ============= =============================================================
++
++SRBDS System Information
++------------------------
++The Linux kernel provides vulnerability status information through sysfs.  For
++SRBDS this can be accessed by the following sysfs file:
++/sys/devices/system/cpu/vulnerabilities/srbds
++
++The possible values contained in this file are:
++
++ ============================== =============================================
++ Not affected                   Processor not vulnerable
++ Vulnerable                     Processor vulnerable and mitigation disabled
++ Vulnerable: No microcode       Processor vulnerable and microcode is missing
++                                mitigation
++ Mitigation: Microcode          Processor is vulnerable and mitigation is in
++                                effect.
++ Mitigation: TSX disabled       Processor is only vulnerable when TSX is
++                                enabled while this system was booted with TSX
++                                disabled.
++ Unknown: Dependent on
++ hypervisor status              Running on virtual guest processor that is
++                                affected but with no way to know if host
++                                processor is mitigated or vulnerable.
++ ============================== =============================================
++
++SRBDS Default mitigation
++------------------------
++This new microcode serializes processor access during execution of RDRAND
++and RDSEED, and ensures that the shared buffer is overwritten before it is
++released for reuse.  Use the "srbds=off" kernel command line to disable the
++mitigation for RDRAND and RDSEED.
+diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
+index 20aac805e197..bb498e7ae2da 100644
+--- a/Documentation/admin-guide/kernel-parameters.txt
++++ b/Documentation/admin-guide/kernel-parameters.txt
+@@ -4659,6 +4659,26 @@
+ 	spia_pedr=
+ 	spia_peddr=
+ 
++	srbds=		[X86,INTEL]
++			Control the Special Register Buffer Data Sampling
++			(SRBDS) mitigation.
++
++			Certain CPUs are vulnerable to an MDS-like
++			exploit which can leak bits from the random
++			number generator.
++
++			By default, this issue is mitigated by
++			microcode.  However, the microcode fix can cause
++			the RDRAND and RDSEED instructions to become
++			much slower.  Among other effects, this will
++			result in reduced throughput from /dev/urandom.
++
++			The microcode mitigation can be disabled with
++			the following option:
++
++			off:    Disable mitigation and remove
++				performance impact to RDRAND and RDSEED
++
+ 	srcutree.counter_wrap_check [KNL]
+ 			Specifies how frequently to check for
+ 			grace-period sequence counter wrap for the
+diff --git a/Makefile b/Makefile
+index 8254beb87a7b..2948731a235c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 17
++SUBLEVEL = 18
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/x86/include/asm/cpu_device_id.h b/arch/x86/include/asm/cpu_device_id.h
+index 31c379c1da41..0c814cd9ea42 100644
+--- a/arch/x86/include/asm/cpu_device_id.h
++++ b/arch/x86/include/asm/cpu_device_id.h
+@@ -9,6 +9,36 @@
+ 
+ #include <linux/mod_devicetable.h>
+ 
++#define X86_CENTAUR_FAM6_C7_D		0xd
++#define X86_CENTAUR_FAM6_NANO		0xf
++
++#define X86_STEPPINGS(mins, maxs)    GENMASK(maxs, mins)
++
++/**
++ * X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE - Base macro for CPU matching
++ * @_vendor:	The vendor name, e.g. INTEL, AMD, HYGON, ..., ANY
++ *		The name is expanded to X86_VENDOR_@_vendor
++ * @_family:	The family number or X86_FAMILY_ANY
++ * @_model:	The model number, model constant or X86_MODEL_ANY
++ * @_steppings:	Bitmask for steppings, stepping constant or X86_STEPPING_ANY
++ * @_feature:	A X86_FEATURE bit or X86_FEATURE_ANY
++ * @_data:	Driver specific data or NULL. The internal storage
++ *		format is unsigned long. The supplied value, pointer
++ *		etc. is cast to unsigned long internally.
++ *
++ * Backport version to keep the SRBDS pile consistent. No shorter variants
++ * required for this.
++ */
++#define X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(_vendor, _family, _model, \
++						    _steppings, _feature, _data) { \
++	.vendor		= X86_VENDOR_##_vendor,				\
++	.family		= _family,					\
++	.model		= _model,					\
++	.steppings	= _steppings,					\
++	.feature	= _feature,					\
++	.driver_data	= (unsigned long) _data				\
++}
++
+ /*
+  * Match specific microcode revisions.
+  *
+diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
+index f3327cb56edf..69f7dcb1fa5c 100644
+--- a/arch/x86/include/asm/cpufeatures.h
++++ b/arch/x86/include/asm/cpufeatures.h
+@@ -360,6 +360,7 @@
+ #define X86_FEATURE_AVX512_4FMAPS	(18*32+ 3) /* AVX-512 Multiply Accumulation Single precision */
+ #define X86_FEATURE_FSRM		(18*32+ 4) /* Fast Short Rep Mov */
+ #define X86_FEATURE_AVX512_VP2INTERSECT (18*32+ 8) /* AVX-512 Intersect for D/Q */
++#define X86_FEATURE_SRBDS_CTRL		(18*32+ 9) /* "" SRBDS mitigation MSR available */
+ #define X86_FEATURE_MD_CLEAR		(18*32+10) /* VERW clears CPU buffers */
+ #define X86_FEATURE_TSX_FORCE_ABORT	(18*32+13) /* "" TSX_FORCE_ABORT */
+ #define X86_FEATURE_PCONFIG		(18*32+18) /* Intel PCONFIG */
+@@ -404,5 +405,6 @@
+ #define X86_BUG_SWAPGS			X86_BUG(21) /* CPU is affected by speculation through SWAPGS */
+ #define X86_BUG_TAA			X86_BUG(22) /* CPU is affected by TSX Async Abort(TAA) */
+ #define X86_BUG_ITLB_MULTIHIT		X86_BUG(23) /* CPU may incur MCE during certain page attribute changes */
++#define X86_BUG_SRBDS			X86_BUG(24) /* CPU may leak RNG bits if not mitigated */
+ 
+ #endif /* _ASM_X86_CPUFEATURES_H */
+diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
+index d5e517d1c3dd..af64c8e80ff4 100644
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -119,6 +119,10 @@
+ #define TSX_CTRL_RTM_DISABLE		BIT(0)	/* Disable RTM feature */
+ #define TSX_CTRL_CPUID_CLEAR		BIT(1)	/* Disable TSX enumeration */
+ 
++/* SRBDS support */
++#define MSR_IA32_MCU_OPT_CTRL		0x00000123
++#define RNGDS_MITG_DIS			BIT(0)
++
+ #define MSR_IA32_SYSENTER_CS		0x00000174
+ #define MSR_IA32_SYSENTER_ESP		0x00000175
+ #define MSR_IA32_SYSENTER_EIP		0x00000176
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index ed54b3b21c39..56978cb06149 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -41,6 +41,7 @@ static void __init l1tf_select_mitigation(void);
+ static void __init mds_select_mitigation(void);
+ static void __init mds_print_mitigation(void);
+ static void __init taa_select_mitigation(void);
++static void __init srbds_select_mitigation(void);
+ 
+ /* The base value of the SPEC_CTRL MSR that always has to be preserved. */
+ u64 x86_spec_ctrl_base;
+@@ -108,6 +109,7 @@ void __init check_bugs(void)
+ 	l1tf_select_mitigation();
+ 	mds_select_mitigation();
+ 	taa_select_mitigation();
++	srbds_select_mitigation();
+ 
+ 	/*
+ 	 * As MDS and TAA mitigations are inter-related, print MDS
+@@ -397,6 +399,97 @@ static int __init tsx_async_abort_parse_cmdline(char *str)
+ }
+ early_param("tsx_async_abort", tsx_async_abort_parse_cmdline);
+ 
++#undef pr_fmt
++#define pr_fmt(fmt)	"SRBDS: " fmt
++
++enum srbds_mitigations {
++	SRBDS_MITIGATION_OFF,
++	SRBDS_MITIGATION_UCODE_NEEDED,
++	SRBDS_MITIGATION_FULL,
++	SRBDS_MITIGATION_TSX_OFF,
++	SRBDS_MITIGATION_HYPERVISOR,
++};
++
++static enum srbds_mitigations srbds_mitigation __ro_after_init = SRBDS_MITIGATION_FULL;
++
++static const char * const srbds_strings[] = {
++	[SRBDS_MITIGATION_OFF]		= "Vulnerable",
++	[SRBDS_MITIGATION_UCODE_NEEDED]	= "Vulnerable: No microcode",
++	[SRBDS_MITIGATION_FULL]		= "Mitigation: Microcode",
++	[SRBDS_MITIGATION_TSX_OFF]	= "Mitigation: TSX disabled",
++	[SRBDS_MITIGATION_HYPERVISOR]	= "Unknown: Dependent on hypervisor status",
++};
++
++static bool srbds_off;
++
++void update_srbds_msr(void)
++{
++	u64 mcu_ctrl;
++
++	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++		return;
++
++	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++		return;
++
++	if (srbds_mitigation == SRBDS_MITIGATION_UCODE_NEEDED)
++		return;
++
++	rdmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++
++	switch (srbds_mitigation) {
++	case SRBDS_MITIGATION_OFF:
++	case SRBDS_MITIGATION_TSX_OFF:
++		mcu_ctrl |= RNGDS_MITG_DIS;
++		break;
++	case SRBDS_MITIGATION_FULL:
++		mcu_ctrl &= ~RNGDS_MITG_DIS;
++		break;
++	default:
++		break;
++	}
++
++	wrmsrl(MSR_IA32_MCU_OPT_CTRL, mcu_ctrl);
++}
++
++static void __init srbds_select_mitigation(void)
++{
++	u64 ia32_cap;
++
++	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++		return;
++
++	/*
++	 * Check to see if this is one of the MDS_NO systems supporting
++	 * TSX that are only exposed to SRBDS when TSX is enabled.
++	 */
++	ia32_cap = x86_read_arch_cap_msr();
++	if ((ia32_cap & ARCH_CAP_MDS_NO) && !boot_cpu_has(X86_FEATURE_RTM))
++		srbds_mitigation = SRBDS_MITIGATION_TSX_OFF;
++	else if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
++		srbds_mitigation = SRBDS_MITIGATION_HYPERVISOR;
++	else if (!boot_cpu_has(X86_FEATURE_SRBDS_CTRL))
++		srbds_mitigation = SRBDS_MITIGATION_UCODE_NEEDED;
++	else if (cpu_mitigations_off() || srbds_off)
++		srbds_mitigation = SRBDS_MITIGATION_OFF;
++
++	update_srbds_msr();
++	pr_info("%s\n", srbds_strings[srbds_mitigation]);
++}
++
++static int __init srbds_parse_cmdline(char *str)
++{
++	if (!str)
++		return -EINVAL;
++
++	if (!boot_cpu_has_bug(X86_BUG_SRBDS))
++		return 0;
++
++	srbds_off = !strcmp(str, "off");
++	return 0;
++}
++early_param("srbds", srbds_parse_cmdline);
++
+ #undef pr_fmt
+ #define pr_fmt(fmt)     "Spectre V1 : " fmt
+ 
+@@ -1528,6 +1621,11 @@ static char *ibpb_state(void)
+ 	return "";
+ }
+ 
++static ssize_t srbds_show_state(char *buf)
++{
++	return sprintf(buf, "%s\n", srbds_strings[srbds_mitigation]);
++}
++
+ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr,
+ 			       char *buf, unsigned int bug)
+ {
+@@ -1572,6 +1670,9 @@ static ssize_t cpu_show_common(struct device *dev, struct device_attribute *attr
+ 	case X86_BUG_ITLB_MULTIHIT:
+ 		return itlb_multihit_show_state(buf);
+ 
++	case X86_BUG_SRBDS:
++		return srbds_show_state(buf);
++
+ 	default:
+ 		break;
+ 	}
+@@ -1618,4 +1719,9 @@ ssize_t cpu_show_itlb_multihit(struct device *dev, struct device_attribute *attr
+ {
+ 	return cpu_show_common(dev, attr, buf, X86_BUG_ITLB_MULTIHIT);
+ }
++
++ssize_t cpu_show_srbds(struct device *dev, struct device_attribute *attr, char *buf)
++{
++	return cpu_show_common(dev, attr, buf, X86_BUG_SRBDS);
++}
+ #endif
+diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
+index 4cdb123ff66a..0567448124e1 100644
+--- a/arch/x86/kernel/cpu/common.c
++++ b/arch/x86/kernel/cpu/common.c
+@@ -1075,9 +1075,30 @@ static const __initconst struct x86_cpu_id cpu_vuln_whitelist[] = {
+ 	{}
+ };
+ 
+-static bool __init cpu_matches(unsigned long which)
++#define VULNBL_INTEL_STEPPINGS(model, steppings, issues)		   \
++	X86_MATCH_VENDOR_FAM_MODEL_STEPPINGS_FEATURE(INTEL, 6,		   \
++					    INTEL_FAM6_##model, steppings, \
++					    X86_FEATURE_ANY, issues)
++
++#define SRBDS		BIT(0)
++
++static const struct x86_cpu_id cpu_vuln_blacklist[] __initconst = {
++	VULNBL_INTEL_STEPPINGS(IVYBRIDGE,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(HASWELL,		X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(HASWELL_L,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(HASWELL_G,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(BROADWELL_G,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(BROADWELL,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE_L,	X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(SKYLAKE,		X86_STEPPING_ANY,		SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE_L,	X86_STEPPINGS(0x0, 0xC),	SRBDS),
++	VULNBL_INTEL_STEPPINGS(KABYLAKE,	X86_STEPPINGS(0x0, 0xD),	SRBDS),
++	{}
++};
++
++static bool __init cpu_matches(const struct x86_cpu_id *table, unsigned long which)
+ {
+-	const struct x86_cpu_id *m = x86_match_cpu(cpu_vuln_whitelist);
++	const struct x86_cpu_id *m = x86_match_cpu(table);
+ 
+ 	return m && !!(m->driver_data & which);
+ }
+@@ -1097,31 +1118,34 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	u64 ia32_cap = x86_read_arch_cap_msr();
+ 
+ 	/* Set ITLB_MULTIHIT bug if cpu is not in the whitelist and not mitigated */
+-	if (!cpu_matches(NO_ITLB_MULTIHIT) && !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
++	if (!cpu_matches(cpu_vuln_whitelist, NO_ITLB_MULTIHIT) &&
++	    !(ia32_cap & ARCH_CAP_PSCHANGE_MC_NO))
+ 		setup_force_cpu_bug(X86_BUG_ITLB_MULTIHIT);
+ 
+-	if (cpu_matches(NO_SPECULATION))
++	if (cpu_matches(cpu_vuln_whitelist, NO_SPECULATION))
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_SPECTRE_V1);
+ 
+-	if (!cpu_matches(NO_SPECTRE_V2))
++	if (!cpu_matches(cpu_vuln_whitelist, NO_SPECTRE_V2))
+ 		setup_force_cpu_bug(X86_BUG_SPECTRE_V2);
+ 
+-	if (!cpu_matches(NO_SSB) && !(ia32_cap & ARCH_CAP_SSB_NO) &&
++	if (!cpu_matches(cpu_vuln_whitelist, NO_SSB) &&
++	    !(ia32_cap & ARCH_CAP_SSB_NO) &&
+ 	   !cpu_has(c, X86_FEATURE_AMD_SSB_NO))
+ 		setup_force_cpu_bug(X86_BUG_SPEC_STORE_BYPASS);
+ 
+ 	if (ia32_cap & ARCH_CAP_IBRS_ALL)
+ 		setup_force_cpu_cap(X86_FEATURE_IBRS_ENHANCED);
+ 
+-	if (!cpu_matches(NO_MDS) && !(ia32_cap & ARCH_CAP_MDS_NO)) {
++	if (!cpu_matches(cpu_vuln_whitelist, NO_MDS) &&
++	    !(ia32_cap & ARCH_CAP_MDS_NO)) {
+ 		setup_force_cpu_bug(X86_BUG_MDS);
+-		if (cpu_matches(MSBDS_ONLY))
++		if (cpu_matches(cpu_vuln_whitelist, MSBDS_ONLY))
+ 			setup_force_cpu_bug(X86_BUG_MSBDS_ONLY);
+ 	}
+ 
+-	if (!cpu_matches(NO_SWAPGS))
++	if (!cpu_matches(cpu_vuln_whitelist, NO_SWAPGS))
+ 		setup_force_cpu_bug(X86_BUG_SWAPGS);
+ 
+ 	/*
+@@ -1139,7 +1163,16 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 	     (ia32_cap & ARCH_CAP_TSX_CTRL_MSR)))
+ 		setup_force_cpu_bug(X86_BUG_TAA);
+ 
+-	if (cpu_matches(NO_MELTDOWN))
++	/*
++	 * SRBDS affects CPUs which support RDRAND or RDSEED and are listed
++	 * in the vulnerability blacklist.
++	 */
++	if ((cpu_has(c, X86_FEATURE_RDRAND) ||
++	     cpu_has(c, X86_FEATURE_RDSEED)) &&
++	    cpu_matches(cpu_vuln_blacklist, SRBDS))
++		    setup_force_cpu_bug(X86_BUG_SRBDS);
++
++	if (cpu_matches(cpu_vuln_whitelist, NO_MELTDOWN))
+ 		return;
+ 
+ 	/* Rogue Data Cache Load? No! */
+@@ -1148,7 +1181,7 @@ static void __init cpu_set_bug_bits(struct cpuinfo_x86 *c)
+ 
+ 	setup_force_cpu_bug(X86_BUG_CPU_MELTDOWN);
+ 
+-	if (cpu_matches(NO_L1TF))
++	if (cpu_matches(cpu_vuln_whitelist, NO_L1TF))
+ 		return;
+ 
+ 	setup_force_cpu_bug(X86_BUG_L1TF);
+@@ -1589,6 +1622,7 @@ void identify_secondary_cpu(struct cpuinfo_x86 *c)
+ 	mtrr_ap_init();
+ 	validate_apic_and_package_id(c);
+ 	x86_spec_ctrl_setup_ap();
++	update_srbds_msr();
+ }
+ 
+ static __init int setup_noclflush(char *arg)
+diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
+index 37fdefd14f28..fb538fccd24c 100644
+--- a/arch/x86/kernel/cpu/cpu.h
++++ b/arch/x86/kernel/cpu/cpu.h
+@@ -77,6 +77,7 @@ extern void detect_ht(struct cpuinfo_x86 *c);
+ unsigned int aperfmperf_get_khz(int cpu);
+ 
+ extern void x86_spec_ctrl_setup_ap(void);
++extern void update_srbds_msr(void);
+ 
+ extern u64 x86_read_arch_cap_msr(void);
+ 
+diff --git a/arch/x86/kernel/cpu/match.c b/arch/x86/kernel/cpu/match.c
+index 6dd78d8235e4..2f163e6646b6 100644
+--- a/arch/x86/kernel/cpu/match.c
++++ b/arch/x86/kernel/cpu/match.c
+@@ -34,13 +34,18 @@ const struct x86_cpu_id *x86_match_cpu(const struct x86_cpu_id *match)
+ 	const struct x86_cpu_id *m;
+ 	struct cpuinfo_x86 *c = &boot_cpu_data;
+ 
+-	for (m = match; m->vendor | m->family | m->model | m->feature; m++) {
++	for (m = match;
++	     m->vendor | m->family | m->model | m->steppings | m->feature;
++	     m++) {
+ 		if (m->vendor != X86_VENDOR_ANY && c->x86_vendor != m->vendor)
+ 			continue;
+ 		if (m->family != X86_FAMILY_ANY && c->x86 != m->family)
+ 			continue;
+ 		if (m->model != X86_MODEL_ANY && c->x86_model != m->model)
+ 			continue;
++		if (m->steppings != X86_STEPPING_ANY &&
++		    !(BIT(c->x86_stepping) & m->steppings))
++			continue;
+ 		if (m->feature != X86_FEATURE_ANY && !cpu_has(c, m->feature))
+ 			continue;
+ 		return m;
+diff --git a/drivers/base/cpu.c b/drivers/base/cpu.c
+index 6265871a4af2..f00da44ae6fe 100644
+--- a/drivers/base/cpu.c
++++ b/drivers/base/cpu.c
+@@ -567,6 +567,12 @@ ssize_t __weak cpu_show_itlb_multihit(struct device *dev,
+ 	return sprintf(buf, "Not affected\n");
+ }
+ 
++ssize_t __weak cpu_show_srbds(struct device *dev,
++			      struct device_attribute *attr, char *buf)
++{
++	return sprintf(buf, "Not affected\n");
++}
++
+ static DEVICE_ATTR(meltdown, 0444, cpu_show_meltdown, NULL);
+ static DEVICE_ATTR(spectre_v1, 0444, cpu_show_spectre_v1, NULL);
+ static DEVICE_ATTR(spectre_v2, 0444, cpu_show_spectre_v2, NULL);
+@@ -575,6 +581,7 @@ static DEVICE_ATTR(l1tf, 0444, cpu_show_l1tf, NULL);
+ static DEVICE_ATTR(mds, 0444, cpu_show_mds, NULL);
+ static DEVICE_ATTR(tsx_async_abort, 0444, cpu_show_tsx_async_abort, NULL);
+ static DEVICE_ATTR(itlb_multihit, 0444, cpu_show_itlb_multihit, NULL);
++static DEVICE_ATTR(srbds, 0444, cpu_show_srbds, NULL);
+ 
+ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_meltdown.attr,
+@@ -585,6 +592,7 @@ static struct attribute *cpu_root_vulnerabilities_attrs[] = {
+ 	&dev_attr_mds.attr,
+ 	&dev_attr_tsx_async_abort.attr,
+ 	&dev_attr_itlb_multihit.attr,
++	&dev_attr_srbds.attr,
+ 	NULL
+ };
+ 
+diff --git a/drivers/iio/adc/stm32-adc-core.c b/drivers/iio/adc/stm32-adc-core.c
+index 2df88d2b880a..0e2068ec068b 100644
+--- a/drivers/iio/adc/stm32-adc-core.c
++++ b/drivers/iio/adc/stm32-adc-core.c
+@@ -65,12 +65,14 @@ struct stm32_adc_priv;
+  * @clk_sel:	clock selection routine
+  * @max_clk_rate_hz: maximum analog clock rate (Hz, from datasheet)
+  * @has_syscfg: SYSCFG capability flags
++ * @num_irqs:	number of interrupt lines
+  */
+ struct stm32_adc_priv_cfg {
+ 	const struct stm32_adc_common_regs *regs;
+ 	int (*clk_sel)(struct platform_device *, struct stm32_adc_priv *);
+ 	u32 max_clk_rate_hz;
+ 	unsigned int has_syscfg;
++	unsigned int num_irqs;
+ };
+ 
+ /**
+@@ -375,21 +377,15 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ 	struct device_node *np = pdev->dev.of_node;
+ 	unsigned int i;
+ 
+-	for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
++	/*
++	 * Interrupt(s) must be provided, depending on the compatible:
++	 * - stm32f4/h7 shares a common interrupt line.
++	 * - stm32mp1, has one line per ADC
++	 */
++	for (i = 0; i < priv->cfg->num_irqs; i++) {
+ 		priv->irq[i] = platform_get_irq(pdev, i);
+-		if (priv->irq[i] < 0) {
+-			/*
+-			 * At least one interrupt must be provided, make others
+-			 * optional:
+-			 * - stm32f4/h7 shares a common interrupt.
+-			 * - stm32mp1, has one line per ADC (either for ADC1,
+-			 *   ADC2 or both).
+-			 */
+-			if (i && priv->irq[i] == -ENXIO)
+-				continue;
+-
++		if (priv->irq[i] < 0)
+ 			return priv->irq[i];
+-		}
+ 	}
+ 
+ 	priv->domain = irq_domain_add_simple(np, STM32_ADC_MAX_ADCS, 0,
+@@ -400,9 +396,7 @@ static int stm32_adc_irq_probe(struct platform_device *pdev,
+ 		return -ENOMEM;
+ 	}
+ 
+-	for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
+-		if (priv->irq[i] < 0)
+-			continue;
++	for (i = 0; i < priv->cfg->num_irqs; i++) {
+ 		irq_set_chained_handler(priv->irq[i], stm32_adc_irq_handler);
+ 		irq_set_handler_data(priv->irq[i], priv);
+ 	}
+@@ -420,11 +414,8 @@ static void stm32_adc_irq_remove(struct platform_device *pdev,
+ 		irq_dispose_mapping(irq_find_mapping(priv->domain, hwirq));
+ 	irq_domain_remove(priv->domain);
+ 
+-	for (i = 0; i < STM32_ADC_MAX_ADCS; i++) {
+-		if (priv->irq[i] < 0)
+-			continue;
++	for (i = 0; i < priv->cfg->num_irqs; i++)
+ 		irq_set_chained_handler(priv->irq[i], NULL);
+-	}
+ }
+ 
+ static int stm32_adc_core_switches_supply_en(struct stm32_adc_priv *priv,
+@@ -817,6 +808,7 @@ static const struct stm32_adc_priv_cfg stm32f4_adc_priv_cfg = {
+ 	.regs = &stm32f4_adc_common_regs,
+ 	.clk_sel = stm32f4_adc_clk_sel,
+ 	.max_clk_rate_hz = 36000000,
++	.num_irqs = 1,
+ };
+ 
+ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+@@ -824,6 +816,7 @@ static const struct stm32_adc_priv_cfg stm32h7_adc_priv_cfg = {
+ 	.clk_sel = stm32h7_adc_clk_sel,
+ 	.max_clk_rate_hz = 36000000,
+ 	.has_syscfg = HAS_VBOOSTER,
++	.num_irqs = 1,
+ };
+ 
+ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+@@ -831,6 +824,7 @@ static const struct stm32_adc_priv_cfg stm32mp1_adc_priv_cfg = {
+ 	.clk_sel = stm32h7_adc_clk_sel,
+ 	.max_clk_rate_hz = 40000000,
+ 	.has_syscfg = HAS_VBOOSTER | HAS_ANASWVDD,
++	.num_irqs = 2,
+ };
+ 
+ static const struct of_device_id stm32_adc_of_match[] = {
+diff --git a/drivers/iio/chemical/pms7003.c b/drivers/iio/chemical/pms7003.c
+index 23c9ab252470..07bb90d72434 100644
+--- a/drivers/iio/chemical/pms7003.c
++++ b/drivers/iio/chemical/pms7003.c
+@@ -73,6 +73,11 @@ struct pms7003_state {
+ 	struct pms7003_frame frame;
+ 	struct completion frame_ready;
+ 	struct mutex lock; /* must be held whenever state gets touched */
++	/* Used to construct scan to push to the IIO buffer */
++	struct {
++		u16 data[3]; /* PM1, PM2P5, PM10 */
++		s64 ts;
++	} scan;
+ };
+ 
+ static int pms7003_do_cmd(struct pms7003_state *state, enum pms7003_cmd cmd)
+@@ -104,7 +109,6 @@ static irqreturn_t pms7003_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct pms7003_state *state = iio_priv(indio_dev);
+ 	struct pms7003_frame *frame = &state->frame;
+-	u16 data[3 + 1 + 4]; /* PM1, PM2P5, PM10, padding, timestamp */
+ 	int ret;
+ 
+ 	mutex_lock(&state->lock);
+@@ -114,12 +118,15 @@ static irqreturn_t pms7003_trigger_handler(int irq, void *p)
+ 		goto err;
+ 	}
+ 
+-	data[PM1] = pms7003_get_pm(frame->data + PMS7003_PM1_OFFSET);
+-	data[PM2P5] = pms7003_get_pm(frame->data + PMS7003_PM2P5_OFFSET);
+-	data[PM10] = pms7003_get_pm(frame->data + PMS7003_PM10_OFFSET);
++	state->scan.data[PM1] =
++		pms7003_get_pm(frame->data + PMS7003_PM1_OFFSET);
++	state->scan.data[PM2P5] =
++		pms7003_get_pm(frame->data + PMS7003_PM2P5_OFFSET);
++	state->scan.data[PM10] =
++		pms7003_get_pm(frame->data + PMS7003_PM10_OFFSET);
+ 	mutex_unlock(&state->lock);
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data,
++	iio_push_to_buffers_with_timestamp(indio_dev, &state->scan,
+ 					   iio_get_time_ns(indio_dev));
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/chemical/sps30.c b/drivers/iio/chemical/sps30.c
+index acb9f8ecbb3d..a88c1fb875a0 100644
+--- a/drivers/iio/chemical/sps30.c
++++ b/drivers/iio/chemical/sps30.c
+@@ -230,15 +230,18 @@ static irqreturn_t sps30_trigger_handler(int irq, void *p)
+ 	struct iio_dev *indio_dev = pf->indio_dev;
+ 	struct sps30_state *state = iio_priv(indio_dev);
+ 	int ret;
+-	s32 data[4 + 2]; /* PM1, PM2P5, PM4, PM10, timestamp */
++	struct {
++		s32 data[4]; /* PM1, PM2P5, PM4, PM10 */
++		s64 ts;
++	} scan;
+ 
+ 	mutex_lock(&state->lock);
+-	ret = sps30_do_meas(state, data, 4);
++	ret = sps30_do_meas(state, scan.data, ARRAY_SIZE(scan.data));
+ 	mutex_unlock(&state->lock);
+ 	if (ret)
+ 		goto err;
+ 
+-	iio_push_to_buffers_with_timestamp(indio_dev, data,
++	iio_push_to_buffers_with_timestamp(indio_dev, &scan,
+ 					   iio_get_time_ns(indio_dev));
+ err:
+ 	iio_trigger_notify_done(indio_dev->trig);
+diff --git a/drivers/iio/light/vcnl4000.c b/drivers/iio/light/vcnl4000.c
+index e5b00a6611ac..7384a3ffcac4 100644
+--- a/drivers/iio/light/vcnl4000.c
++++ b/drivers/iio/light/vcnl4000.c
+@@ -193,7 +193,6 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
+ 				u8 rdy_mask, u8 data_reg, int *val)
+ {
+ 	int tries = 20;
+-	__be16 buf;
+ 	int ret;
+ 
+ 	mutex_lock(&data->vcnl4000_lock);
+@@ -220,13 +219,12 @@ static int vcnl4000_measure(struct vcnl4000_data *data, u8 req_mask,
+ 		goto fail;
+ 	}
+ 
+-	ret = i2c_smbus_read_i2c_block_data(data->client,
+-		data_reg, sizeof(buf), (u8 *) &buf);
++	ret = i2c_smbus_read_word_swapped(data->client, data_reg);
+ 	if (ret < 0)
+ 		goto fail;
+ 
+ 	mutex_unlock(&data->vcnl4000_lock);
+-	*val = be16_to_cpu(buf);
++	*val = ret;
+ 
+ 	return 0;
+ 
+diff --git a/drivers/net/dsa/ocelot/felix.c b/drivers/net/dsa/ocelot/felix.c
+index b74580e87be8..5d9db8d042c1 100644
+--- a/drivers/net/dsa/ocelot/felix.c
++++ b/drivers/net/dsa/ocelot/felix.c
+@@ -100,13 +100,17 @@ static void felix_vlan_add(struct dsa_switch *ds, int port,
+ 			   const struct switchdev_obj_port_vlan *vlan)
+ {
+ 	struct ocelot *ocelot = ds->priv;
++	u16 flags = vlan->flags;
+ 	u16 vid;
+ 	int err;
+ 
++	if (dsa_is_cpu_port(ds, port))
++		flags &= ~BRIDGE_VLAN_INFO_UNTAGGED;
++
+ 	for (vid = vlan->vid_begin; vid <= vlan->vid_end; vid++) {
+ 		err = ocelot_vlan_add(ocelot, port, vid,
+-				      vlan->flags & BRIDGE_VLAN_INFO_PVID,
+-				      vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED);
++				      flags & BRIDGE_VLAN_INFO_PVID,
++				      flags & BRIDGE_VLAN_INFO_UNTAGGED);
+ 		if (err) {
+ 			dev_err(ds->dev, "Failed to add VLAN %d to port %d: %d\n",
+ 				vid, port, err);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+index 4659c205cc01..46ff83408d05 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+@@ -1824,7 +1824,7 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+ 	flow_rule_match_meta(rule, &match);
+ 	if (match.mask->ingress_ifindex != 0xFFFFFFFF) {
+ 		NL_SET_ERR_MSG_MOD(extack, "Unsupported ingress ifindex mask");
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	ingress_dev = __dev_get_by_index(dev_net(filter_dev),
+@@ -1832,13 +1832,13 @@ static int mlx5e_flower_parse_meta(struct net_device *filter_dev,
+ 	if (!ingress_dev) {
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "Can't find the ingress port to match on");
+-		return -EINVAL;
++		return -ENOENT;
+ 	}
+ 
+ 	if (ingress_dev != filter_dev) {
+ 		NL_SET_ERR_MSG_MOD(extack,
+ 				   "Can't match on the ingress filter port");
+-		return -EINVAL;
++		return -EOPNOTSUPP;
+ 	}
+ 
+ 	return 0;
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+index cf09cfc33234..cdc566768a07 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+@@ -416,12 +416,6 @@ static void del_sw_ns(struct fs_node *node)
+ 
+ static void del_sw_prio(struct fs_node *node)
+ {
+-	struct mlx5_flow_root_namespace *root_ns;
+-	struct mlx5_flow_namespace *ns;
+-
+-	fs_get_obj(ns, node);
+-	root_ns = container_of(ns, struct mlx5_flow_root_namespace, ns);
+-	mutex_destroy(&root_ns->chain_lock);
+ 	kfree(node);
+ }
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 4a08e4eef283..20e12e14cfa8 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -1552,6 +1552,22 @@ static void shutdown(struct pci_dev *pdev)
+ 	mlx5_pci_disable_device(dev);
+ }
+ 
++static int mlx5_suspend(struct pci_dev *pdev, pm_message_t state)
++{
++	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
++
++	mlx5_unload_one(dev, false);
++
++	return 0;
++}
++
++static int mlx5_resume(struct pci_dev *pdev)
++{
++	struct mlx5_core_dev *dev = pci_get_drvdata(pdev);
++
++	return mlx5_load_one(dev, false);
++}
++
+ static const struct pci_device_id mlx5_core_pci_table[] = {
+ 	{ PCI_VDEVICE(MELLANOX, PCI_DEVICE_ID_MELLANOX_CONNECTIB) },
+ 	{ PCI_VDEVICE(MELLANOX, 0x1012), MLX5_PCI_DEV_IS_VF},	/* Connect-IB VF */
+@@ -1595,6 +1611,8 @@ static struct pci_driver mlx5_core_driver = {
+ 	.id_table       = mlx5_core_pci_table,
+ 	.probe          = init_one,
+ 	.remove         = remove_one,
++	.suspend        = mlx5_suspend,
++	.resume         = mlx5_resume,
+ 	.shutdown	= shutdown,
+ 	.err_handler	= &mlx5_err_handler,
+ 	.sriov_configure   = mlx5_core_sriov_configure,
+diff --git a/drivers/net/ethernet/netronome/nfp/flower/offload.c b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+index 7ca5c1becfcf..c5dcfdd69773 100644
+--- a/drivers/net/ethernet/netronome/nfp/flower/offload.c
++++ b/drivers/net/ethernet/netronome/nfp/flower/offload.c
+@@ -1440,7 +1440,8 @@ __nfp_flower_update_merge_stats(struct nfp_app *app,
+ 		ctx_id = be32_to_cpu(sub_flow->meta.host_ctx_id);
+ 		priv->stats[ctx_id].pkts += pkts;
+ 		priv->stats[ctx_id].bytes += bytes;
+-		max_t(u64, priv->stats[ctx_id].used, used);
++		priv->stats[ctx_id].used = max_t(u64, used,
++						 priv->stats[ctx_id].used);
+ 	}
+ }
+ 
+diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+index d564459290ce..bcb39012d34d 100644
+--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
++++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+@@ -630,7 +630,8 @@ static int stmmac_hwtstamp_set(struct net_device *dev, struct ifreq *ifr)
+ 			config.rx_filter = HWTSTAMP_FILTER_PTP_V2_EVENT;
+ 			ptp_v2 = PTP_TCR_TSVER2ENA;
+ 			snap_type_sel = PTP_TCR_SNAPTYPSEL_1;
+-			ts_event_en = PTP_TCR_TSEVNTENA;
++			if (priv->synopsys_id != DWMAC_CORE_5_10)
++				ts_event_en = PTP_TCR_TSEVNTENA;
+ 			ptp_over_ipv4_udp = PTP_TCR_TSIPV4ENA;
+ 			ptp_over_ipv6_udp = PTP_TCR_TSIPV6ENA;
+ 			ptp_over_ethernet = PTP_TCR_TSIPENA;
+diff --git a/drivers/net/usb/qmi_wwan.c b/drivers/net/usb/qmi_wwan.c
+index 4bb8552a00d3..4a2c7355be63 100644
+--- a/drivers/net/usb/qmi_wwan.c
++++ b/drivers/net/usb/qmi_wwan.c
+@@ -1324,6 +1324,7 @@ static const struct usb_device_id products[] = {
+ 	{QMI_FIXED_INTF(0x1bbb, 0x0203, 2)},	/* Alcatel L800MA */
+ 	{QMI_FIXED_INTF(0x2357, 0x0201, 4)},	/* TP-LINK HSUPA Modem MA180 */
+ 	{QMI_FIXED_INTF(0x2357, 0x9000, 4)},	/* TP-LINK MA260 */
++	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1031, 3)}, /* Telit LE910C1-EUX */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1040, 2)},	/* Telit LE922A */
+ 	{QMI_QUIRK_SET_DTR(0x1bc7, 0x1050, 2)},	/* Telit FN980 */
+ 	{QMI_FIXED_INTF(0x1bc7, 0x1100, 3)},	/* Telit ME910 */
+diff --git a/drivers/nfc/st21nfca/dep.c b/drivers/nfc/st21nfca/dep.c
+index 60acdfd1cb8c..856a10c293f8 100644
+--- a/drivers/nfc/st21nfca/dep.c
++++ b/drivers/nfc/st21nfca/dep.c
+@@ -173,8 +173,10 @@ static int st21nfca_tm_send_atr_res(struct nfc_hci_dev *hdev,
+ 		memcpy(atr_res->gbi, atr_req->gbi, gb_len);
+ 		r = nfc_set_remote_general_bytes(hdev->ndev, atr_res->gbi,
+ 						  gb_len);
+-		if (r < 0)
++		if (r < 0) {
++			kfree_skb(skb);
+ 			return r;
++		}
+ 	}
+ 
+ 	info->dep_info.curr_nfc_dep_pni = 0;
+diff --git a/drivers/nvmem/qfprom.c b/drivers/nvmem/qfprom.c
+index d057f1bfb2e9..8a91717600be 100644
+--- a/drivers/nvmem/qfprom.c
++++ b/drivers/nvmem/qfprom.c
+@@ -27,25 +27,11 @@ static int qfprom_reg_read(void *context,
+ 	return 0;
+ }
+ 
+-static int qfprom_reg_write(void *context,
+-			 unsigned int reg, void *_val, size_t bytes)
+-{
+-	struct qfprom_priv *priv = context;
+-	u8 *val = _val;
+-	int i = 0, words = bytes;
+-
+-	while (words--)
+-		writeb(*val++, priv->base + reg + i++);
+-
+-	return 0;
+-}
+-
+ static struct nvmem_config econfig = {
+ 	.name = "qfprom",
+ 	.stride = 1,
+ 	.word_size = 1,
+ 	.reg_read = qfprom_reg_read,
+-	.reg_write = qfprom_reg_write,
+ };
+ 
+ static int qfprom_probe(struct platform_device *pdev)
+diff --git a/drivers/staging/rtl8712/wifi.h b/drivers/staging/rtl8712/wifi.h
+index be731f1a2209..91b65731fcaa 100644
+--- a/drivers/staging/rtl8712/wifi.h
++++ b/drivers/staging/rtl8712/wifi.h
+@@ -440,7 +440,7 @@ static inline unsigned char *get_hdr_bssid(unsigned char *pframe)
+ /* block-ack parameters */
+ #define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
+ #define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
+-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
++#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFC0
+ #define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
+ #define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
+ 
+@@ -532,13 +532,6 @@ struct ieee80211_ht_addt_info {
+ #define IEEE80211_HT_IE_NON_GF_STA_PRSNT	0x0004
+ #define IEEE80211_HT_IE_NON_HT_STA_PRSNT	0x0010
+ 
+-/* block-ack parameters */
+-#define IEEE80211_ADDBA_PARAM_POLICY_MASK 0x0002
+-#define IEEE80211_ADDBA_PARAM_TID_MASK 0x003C
+-#define IEEE80211_ADDBA_PARAM_BUF_SIZE_MASK 0xFFA0
+-#define IEEE80211_DELBA_PARAM_TID_MASK 0xF000
+-#define IEEE80211_DELBA_PARAM_INITIATOR_MASK 0x0800
+-
+ /*
+  * A-PMDU buffer sizes
+  * According to IEEE802.11n spec size varies from 8K to 64K (in powers of 2)
+diff --git a/drivers/tty/hvc/hvc_console.c b/drivers/tty/hvc/hvc_console.c
+index 436cc51c92c3..cdcc64ea2554 100644
+--- a/drivers/tty/hvc/hvc_console.c
++++ b/drivers/tty/hvc/hvc_console.c
+@@ -371,15 +371,14 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ 	 * tty fields and return the kref reference.
+ 	 */
+ 	if (rc) {
+-		tty_port_tty_set(&hp->port, NULL);
+-		tty->driver_data = NULL;
+-		tty_port_put(&hp->port);
+ 		printk(KERN_ERR "hvc_open: request_irq failed with rc %d.\n", rc);
+-	} else
++	} else {
+ 		/* We are ready... raise DTR/RTS */
+ 		if (C_BAUD(tty))
+ 			if (hp->ops->dtr_rts)
+ 				hp->ops->dtr_rts(hp, 1);
++		tty_port_set_initialized(&hp->port, true);
++	}
+ 
+ 	/* Force wakeup of the polling thread */
+ 	hvc_kick();
+@@ -389,22 +388,12 @@ static int hvc_open(struct tty_struct *tty, struct file * filp)
+ 
+ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ {
+-	struct hvc_struct *hp;
++	struct hvc_struct *hp = tty->driver_data;
+ 	unsigned long flags;
+ 
+ 	if (tty_hung_up_p(filp))
+ 		return;
+ 
+-	/*
+-	 * No driver_data means that this close was issued after a failed
+-	 * hvc_open by the tty layer's release_dev() function and we can just
+-	 * exit cleanly because the kref reference wasn't made.
+-	 */
+-	if (!tty->driver_data)
+-		return;
+-
+-	hp = tty->driver_data;
+-
+ 	spin_lock_irqsave(&hp->port.lock, flags);
+ 
+ 	if (--hp->port.count == 0) {
+@@ -412,6 +401,9 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ 		/* We are done with the tty pointer now. */
+ 		tty_port_tty_set(&hp->port, NULL);
+ 
++		if (!tty_port_initialized(&hp->port))
++			return;
++
+ 		if (C_HUPCL(tty))
+ 			if (hp->ops->dtr_rts)
+ 				hp->ops->dtr_rts(hp, 0);
+@@ -428,6 +420,7 @@ static void hvc_close(struct tty_struct *tty, struct file * filp)
+ 		 * waking periodically to check chars_in_buffer().
+ 		 */
+ 		tty_wait_until_sent(tty, HVC_CLOSE_WAIT);
++		tty_port_set_initialized(&hp->port, false);
+ 	} else {
+ 		if (hp->port.count < 0)
+ 			printk(KERN_ERR "hvc_close %X: oops, count is %d\n",
+diff --git a/drivers/tty/serial/8250/Kconfig b/drivers/tty/serial/8250/Kconfig
+index f16824bbb573..c9da6c142c6f 100644
+--- a/drivers/tty/serial/8250/Kconfig
++++ b/drivers/tty/serial/8250/Kconfig
+@@ -63,6 +63,7 @@ config SERIAL_8250_PNP
+ config SERIAL_8250_16550A_VARIANTS
+ 	bool "Support for variants of the 16550A serial port"
+ 	depends on SERIAL_8250
++	default !X86
+ 	help
+ 	  The 8250 driver can probe for many variants of the venerable 16550A
+ 	  serial port. Doing so takes additional time at boot.
+diff --git a/drivers/tty/vt/keyboard.c b/drivers/tty/vt/keyboard.c
+index 15d33fa0c925..568b2171f335 100644
+--- a/drivers/tty/vt/keyboard.c
++++ b/drivers/tty/vt/keyboard.c
+@@ -127,7 +127,11 @@ static DEFINE_SPINLOCK(func_buf_lock); /* guard 'func_buf'  and friends */
+ static unsigned long key_down[BITS_TO_LONGS(KEY_CNT)];	/* keyboard key bitmap */
+ static unsigned char shift_down[NR_SHIFT];		/* shift state counters.. */
+ static bool dead_key_next;
+-static int npadch = -1;					/* -1 or number assembled on pad */
++
++/* Handles a number being assembled on the number pad */
++static bool npadch_active;
++static unsigned int npadch_value;
++
+ static unsigned int diacr;
+ static char rep;					/* flag telling character repeat */
+ 
+@@ -845,12 +849,12 @@ static void k_shift(struct vc_data *vc, unsigned char value, char up_flag)
+ 		shift_state &= ~(1 << value);
+ 
+ 	/* kludge */
+-	if (up_flag && shift_state != old_state && npadch != -1) {
++	if (up_flag && shift_state != old_state && npadch_active) {
+ 		if (kbd->kbdmode == VC_UNICODE)
+-			to_utf8(vc, npadch);
++			to_utf8(vc, npadch_value);
+ 		else
+-			put_queue(vc, npadch & 0xff);
+-		npadch = -1;
++			put_queue(vc, npadch_value & 0xff);
++		npadch_active = false;
+ 	}
+ }
+ 
+@@ -868,7 +872,7 @@ static void k_meta(struct vc_data *vc, unsigned char value, char up_flag)
+ 
+ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
+ {
+-	int base;
++	unsigned int base;
+ 
+ 	if (up_flag)
+ 		return;
+@@ -882,10 +886,12 @@ static void k_ascii(struct vc_data *vc, unsigned char value, char up_flag)
+ 		base = 16;
+ 	}
+ 
+-	if (npadch == -1)
+-		npadch = value;
+-	else
+-		npadch = npadch * base + value;
++	if (!npadch_active) {
++		npadch_value = 0;
++		npadch_active = true;
++	}
++
++	npadch_value = npadch_value * base + value;
+ }
+ 
+ static void k_lock(struct vc_data *vc, unsigned char value, char up_flag)
+diff --git a/drivers/usb/class/cdc-acm.c b/drivers/usb/class/cdc-acm.c
+index 8ca72d80501d..f67088bb8218 100644
+--- a/drivers/usb/class/cdc-acm.c
++++ b/drivers/usb/class/cdc-acm.c
+@@ -584,7 +584,7 @@ static void acm_softint(struct work_struct *work)
+ 	}
+ 
+ 	if (test_and_clear_bit(ACM_ERROR_DELAY, &acm->flags)) {
+-		for (i = 0; i < ACM_NR; i++)
++		for (i = 0; i < acm->rx_buflimit; i++)
+ 			if (test_and_clear_bit(i, &acm->urbs_in_error_delay))
+ 					acm_submit_read_urb(acm, i, GFP_NOIO);
+ 	}
+diff --git a/drivers/usb/musb/musb_core.c b/drivers/usb/musb/musb_core.c
+index f616fb489542..f38d24fff166 100644
+--- a/drivers/usb/musb/musb_core.c
++++ b/drivers/usb/musb/musb_core.c
+@@ -2877,6 +2877,13 @@ static int musb_resume(struct device *dev)
+ 	musb_enable_interrupts(musb);
+ 	musb_platform_enable(musb);
+ 
++	/* session might be disabled in suspend */
++	if (musb->port_mode == MUSB_HOST &&
++	    !(musb->ops->quirks & MUSB_PRESERVE_SESSION)) {
++		devctl |= MUSB_DEVCTL_SESSION;
++		musb_writeb(musb->mregs, MUSB_DEVCTL, devctl);
++	}
++
+ 	spin_lock_irqsave(&musb->lock, flags);
+ 	error = musb_run_resume_work(musb);
+ 	if (error)
+diff --git a/drivers/usb/musb/musb_debugfs.c b/drivers/usb/musb/musb_debugfs.c
+index 7b6281ab62ed..30a89aa8a3e7 100644
+--- a/drivers/usb/musb/musb_debugfs.c
++++ b/drivers/usb/musb/musb_debugfs.c
+@@ -168,6 +168,11 @@ static ssize_t musb_test_mode_write(struct file *file,
+ 	u8			test;
+ 	char			buf[24];
+ 
++	memset(buf, 0x00, sizeof(buf));
++
++	if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
++		return -EFAULT;
++
+ 	pm_runtime_get_sync(musb->controller);
+ 	test = musb_readb(musb->mregs, MUSB_TESTMODE);
+ 	if (test) {
+@@ -176,11 +181,6 @@ static ssize_t musb_test_mode_write(struct file *file,
+ 		goto ret;
+ 	}
+ 
+-	memset(buf, 0x00, sizeof(buf));
+-
+-	if (copy_from_user(buf, ubuf, min_t(size_t, sizeof(buf) - 1, count)))
+-		return -EFAULT;
+-
+ 	if (strstarts(buf, "force host full-speed"))
+ 		test = MUSB_TEST_FORCE_HOST | MUSB_TEST_FORCE_FS;
+ 
+diff --git a/drivers/usb/serial/ch341.c b/drivers/usb/serial/ch341.c
+index c5ecdcd51ffc..89675ee29645 100644
+--- a/drivers/usb/serial/ch341.c
++++ b/drivers/usb/serial/ch341.c
+@@ -73,6 +73,8 @@
+ #define CH341_LCR_CS6          0x01
+ #define CH341_LCR_CS5          0x00
+ 
++#define CH341_QUIRK_LIMITED_PRESCALER	BIT(0)
++
+ static const struct usb_device_id id_table[] = {
+ 	{ USB_DEVICE(0x4348, 0x5523) },
+ 	{ USB_DEVICE(0x1a86, 0x7523) },
+@@ -87,6 +89,7 @@ struct ch341_private {
+ 	u8 mcr;
+ 	u8 msr;
+ 	u8 lcr;
++	unsigned long quirks;
+ };
+ 
+ static void ch341_set_termios(struct tty_struct *tty,
+@@ -159,9 +162,11 @@ static const speed_t ch341_min_rates[] = {
+  *		2 <= div <= 256 if fact = 0, or
+  *		9 <= div <= 256 if fact = 1
+  */
+-static int ch341_get_divisor(speed_t speed)
++static int ch341_get_divisor(struct ch341_private *priv)
+ {
+ 	unsigned int fact, div, clk_div;
++	speed_t speed = priv->baud_rate;
++	bool force_fact0 = false;
+ 	int ps;
+ 
+ 	/*
+@@ -187,8 +192,12 @@ static int ch341_get_divisor(speed_t speed)
+ 	clk_div = CH341_CLK_DIV(ps, fact);
+ 	div = CH341_CLKRATE / (clk_div * speed);
+ 
++	/* Some devices require a lower base clock if ps < 3. */
++	if (ps < 3 && (priv->quirks & CH341_QUIRK_LIMITED_PRESCALER))
++		force_fact0 = true;
++
+ 	/* Halve base clock (fact = 0) if required. */
+-	if (div < 9 || div > 255) {
++	if (div < 9 || div > 255 || force_fact0) {
+ 		div /= 2;
+ 		clk_div *= 2;
+ 		fact = 0;
+@@ -227,7 +236,7 @@ static int ch341_set_baudrate_lcr(struct usb_device *dev,
+ 	if (!priv->baud_rate)
+ 		return -EINVAL;
+ 
+-	val = ch341_get_divisor(priv->baud_rate);
++	val = ch341_get_divisor(priv);
+ 	if (val < 0)
+ 		return -EINVAL;
+ 
+@@ -308,6 +317,54 @@ out:	kfree(buffer);
+ 	return r;
+ }
+ 
++static int ch341_detect_quirks(struct usb_serial_port *port)
++{
++	struct ch341_private *priv = usb_get_serial_port_data(port);
++	struct usb_device *udev = port->serial->dev;
++	const unsigned int size = 2;
++	unsigned long quirks = 0;
++	char *buffer;
++	int r;
++
++	buffer = kmalloc(size, GFP_KERNEL);
++	if (!buffer)
++		return -ENOMEM;
++
++	/*
++	 * A subset of CH34x devices does not support all features. The
++	 * prescaler is limited and there is no support for sending a RS232
++	 * break condition. A read failure when trying to set up the latter is
++	 * used to detect these devices.
++	 */
++	r = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), CH341_REQ_READ_REG,
++			    USB_TYPE_VENDOR | USB_RECIP_DEVICE | USB_DIR_IN,
++			    CH341_REG_BREAK, 0, buffer, size, DEFAULT_TIMEOUT);
++	if (r == -EPIPE) {
++		dev_dbg(&port->dev, "break control not supported\n");
++		quirks = CH341_QUIRK_LIMITED_PRESCALER;
++		r = 0;
++		goto out;
++	}
++
++	if (r != size) {
++		if (r >= 0)
++			r = -EIO;
++		dev_err(&port->dev, "failed to read break control: %d\n", r);
++		goto out;
++	}
++
++	r = 0;
++out:
++	kfree(buffer);
++
++	if (quirks) {
++		dev_dbg(&port->dev, "enabling quirk flags: 0x%02lx\n", quirks);
++		priv->quirks |= quirks;
++	}
++
++	return r;
++}
++
+ static int ch341_port_probe(struct usb_serial_port *port)
+ {
+ 	struct ch341_private *priv;
+@@ -330,6 +387,11 @@ static int ch341_port_probe(struct usb_serial_port *port)
+ 		goto error;
+ 
+ 	usb_set_serial_port_data(port, priv);
++
++	r = ch341_detect_quirks(port);
++	if (r < 0)
++		goto error;
++
+ 	return 0;
+ 
+ error:	kfree(priv);
+diff --git a/drivers/usb/serial/option.c b/drivers/usb/serial/option.c
+index 8bfffca3e4ae..254a8bbeea67 100644
+--- a/drivers/usb/serial/option.c
++++ b/drivers/usb/serial/option.c
+@@ -1157,6 +1157,10 @@ static const struct usb_device_id option_ids[] = {
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_CC864_SINGLE) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_DE910_DUAL) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_UE910_V2) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1031, 0xff),	/* Telit LE910C1-EUX */
++	 .driver_info = NCTRL(0) | RSVD(3) },
++	{ USB_DEVICE_INTERFACE_CLASS(TELIT_VENDOR_ID, 0x1033, 0xff),	/* Telit LE910C1-EUX (ECM) */
++	 .driver_info = NCTRL(0) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG0),
+ 	  .driver_info = RSVD(0) | RSVD(1) | NCTRL(2) | RSVD(3) },
+ 	{ USB_DEVICE(TELIT_VENDOR_ID, TELIT_PRODUCT_LE922_USBCFG1),
+diff --git a/drivers/usb/serial/qcserial.c b/drivers/usb/serial/qcserial.c
+index ce0401d3137f..d147feae83e6 100644
+--- a/drivers/usb/serial/qcserial.c
++++ b/drivers/usb/serial/qcserial.c
+@@ -173,6 +173,7 @@ static const struct usb_device_id id_table[] = {
+ 	{DEVICE_SWI(0x413c, 0x81b3)},	/* Dell Wireless 5809e Gobi(TM) 4G LTE Mobile Broadband Card (rev3) */
+ 	{DEVICE_SWI(0x413c, 0x81b5)},	/* Dell Wireless 5811e QDL */
+ 	{DEVICE_SWI(0x413c, 0x81b6)},	/* Dell Wireless 5811e QDL */
++	{DEVICE_SWI(0x413c, 0x81cb)},	/* Dell Wireless 5816e QDL */
+ 	{DEVICE_SWI(0x413c, 0x81cc)},	/* Dell Wireless 5816e */
+ 	{DEVICE_SWI(0x413c, 0x81cf)},   /* Dell Wireless 5819 */
+ 	{DEVICE_SWI(0x413c, 0x81d0)},   /* Dell Wireless 5819 */
+diff --git a/drivers/usb/serial/usb_wwan.c b/drivers/usb/serial/usb_wwan.c
+index 13be21aad2f4..4b9845807bee 100644
+--- a/drivers/usb/serial/usb_wwan.c
++++ b/drivers/usb/serial/usb_wwan.c
+@@ -270,6 +270,10 @@ static void usb_wwan_indat_callback(struct urb *urb)
+ 	if (status) {
+ 		dev_dbg(dev, "%s: nonzero status: %d on endpoint %02x.\n",
+ 			__func__, status, endpoint);
++
++		/* don't resubmit on fatal errors */
++		if (status == -ESHUTDOWN || status == -ENOENT)
++			return;
+ 	} else {
+ 		if (urb->actual_length) {
+ 			tty_insert_flip_string(&port->port, data,
+diff --git a/include/linux/mod_devicetable.h b/include/linux/mod_devicetable.h
+index e3596db077dc..953d7ca01eb6 100644
+--- a/include/linux/mod_devicetable.h
++++ b/include/linux/mod_devicetable.h
+@@ -657,6 +657,10 @@ struct mips_cdmm_device_id {
+ /*
+  * MODULE_DEVICE_TABLE expects this struct to be called x86cpu_device_id.
+  * Although gcc seems to ignore this error, clang fails without this define.
++ *
++ * Note: The ordering of the struct is different from upstream because the
++ * static initializers in kernels < 5.7 still use C89 style while upstream
++ * has been converted to proper C99 initializers.
+  */
+ #define x86cpu_device_id x86_cpu_id
+ struct x86_cpu_id {
+@@ -665,6 +669,7 @@ struct x86_cpu_id {
+ 	__u16 model;
+ 	__u16 feature;	/* bit index */
+ 	kernel_ulong_t driver_data;
++	__u16 steppings;
+ };
+ 
+ #define X86_FEATURE_MATCH(x) \
+@@ -673,6 +678,7 @@ struct x86_cpu_id {
+ #define X86_VENDOR_ANY 0xffff
+ #define X86_FAMILY_ANY 0
+ #define X86_MODEL_ANY  0
++#define X86_STEPPING_ANY 0
+ #define X86_FEATURE_ANY 0	/* Same as FPU, you can't test for that */
+ 
+ /*
+diff --git a/include/linux/virtio_net.h b/include/linux/virtio_net.h
+index 6f6ade63b04c..e8a924eeea3d 100644
+--- a/include/linux/virtio_net.h
++++ b/include/linux/virtio_net.h
+@@ -31,6 +31,7 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ {
+ 	unsigned int gso_type = 0;
+ 	unsigned int thlen = 0;
++	unsigned int p_off = 0;
+ 	unsigned int ip_proto;
+ 
+ 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+@@ -68,7 +69,8 @@ static inline int virtio_net_hdr_to_skb(struct sk_buff *skb,
+ 		if (!skb_partial_csum_set(skb, start, off))
+ 			return -EINVAL;
+ 
+-		if (skb_transport_offset(skb) + thlen > skb_headlen(skb))
++		p_off = skb_transport_offset(skb) + thlen;
++		if (p_off > skb_headlen(skb))
+ 			return -EINVAL;
+ 	} else {
+ 		/* gso packets without NEEDS_CSUM do not set transport_offset.
+@@ -92,23 +94,32 @@ retry:
+ 				return -EINVAL;
+ 			}
+ 
+-			if (keys.control.thoff + thlen > skb_headlen(skb) ||
++			p_off = keys.control.thoff + thlen;
++			if (p_off > skb_headlen(skb) ||
+ 			    keys.basic.ip_proto != ip_proto)
+ 				return -EINVAL;
+ 
+ 			skb_set_transport_header(skb, keys.control.thoff);
++		} else if (gso_type) {
++			p_off = thlen;
++			if (p_off > skb_headlen(skb))
++				return -EINVAL;
+ 		}
+ 	}
+ 
+ 	if (hdr->gso_type != VIRTIO_NET_HDR_GSO_NONE) {
+ 		u16 gso_size = __virtio16_to_cpu(little_endian, hdr->gso_size);
++		struct skb_shared_info *shinfo = skb_shinfo(skb);
+ 
+-		skb_shinfo(skb)->gso_size = gso_size;
+-		skb_shinfo(skb)->gso_type = gso_type;
++		/* Too small packets are not really GSO ones. */
++		if (skb->len - p_off > gso_size) {
++			shinfo->gso_size = gso_size;
++			shinfo->gso_type = gso_type;
+ 
+-		/* Header must be checked, and gso_segs computed. */
+-		skb_shinfo(skb)->gso_type |= SKB_GSO_DODGY;
+-		skb_shinfo(skb)->gso_segs = 0;
++			/* Header must be checked, and gso_segs computed. */
++			shinfo->gso_type |= SKB_GSO_DODGY;
++			shinfo->gso_segs = 0;
++		}
+ 	}
+ 
+ 	return 0;
+diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
+index ece7e13f6e4a..cc2095607c74 100644
+--- a/kernel/events/uprobes.c
++++ b/kernel/events/uprobes.c
+@@ -867,10 +867,6 @@ static int prepare_uprobe(struct uprobe *uprobe, struct file *file,
+ 	if (ret)
+ 		goto out;
+ 
+-	/* uprobe_write_opcode() assumes we don't cross page boundary */
+-	BUG_ON((uprobe->offset & ~PAGE_MASK) +
+-			UPROBE_SWBP_INSN_SIZE > PAGE_SIZE);
+-
+ 	smp_wmb(); /* pairs with the smp_rmb() in handle_swbp() */
+ 	set_bit(UPROBE_COPY_INSN, &uprobe->flags);
+ 
+@@ -1166,6 +1162,15 @@ static int __uprobe_register(struct inode *inode, loff_t offset,
+ 	if (offset > i_size_read(inode))
+ 		return -EINVAL;
+ 
++	/*
++	 * This ensures that copy_from_page(), copy_to_page() and
++	 * __update_ref_ctr() can't cross page boundary.
++	 */
++	if (!IS_ALIGNED(offset, UPROBE_SWBP_INSN_SIZE))
++		return -EINVAL;
++	if (!IS_ALIGNED(ref_ctr_offset, sizeof(short)))
++		return -EINVAL;
++
+  retry:
+ 	uprobe = alloc_uprobe(inode, offset, ref_ctr_offset);
+ 	if (!uprobe)
+@@ -2014,6 +2019,9 @@ static int is_trap_at_addr(struct mm_struct *mm, unsigned long vaddr)
+ 	uprobe_opcode_t opcode;
+ 	int result;
+ 
++	if (WARN_ON_ONCE(!IS_ALIGNED(vaddr, UPROBE_SWBP_INSN_SIZE)))
++		return -EINVAL;
++
+ 	pagefault_disable();
+ 	result = __get_user(opcode, (uprobe_opcode_t __user *)vaddr);
+ 	pagefault_enable();
+diff --git a/net/ipv4/devinet.c b/net/ipv4/devinet.c
+index 458dc6eb5a68..a27d034c85cc 100644
+--- a/net/ipv4/devinet.c
++++ b/net/ipv4/devinet.c
+@@ -276,6 +276,7 @@ static struct in_device *inetdev_init(struct net_device *dev)
+ 	err = devinet_sysctl_register(in_dev);
+ 	if (err) {
+ 		in_dev->dead = 1;
++		neigh_parms_release(&arp_tbl, in_dev->arp_parms);
+ 		in_dev_put(in_dev);
+ 		in_dev = NULL;
+ 		goto out;
+diff --git a/net/l2tp/l2tp_core.c b/net/l2tp/l2tp_core.c
+index fcb53ed1c4fb..6d7ef78c88af 100644
+--- a/net/l2tp/l2tp_core.c
++++ b/net/l2tp/l2tp_core.c
+@@ -1458,6 +1458,9 @@ static int l2tp_validate_socket(const struct sock *sk, const struct net *net,
+ 	if (sk->sk_type != SOCK_DGRAM)
+ 		return -EPROTONOSUPPORT;
+ 
++	if (sk->sk_family != PF_INET && sk->sk_family != PF_INET6)
++		return -EPROTONOSUPPORT;
++
+ 	if ((encap == L2TP_ENCAPTYPE_UDP && sk->sk_protocol != IPPROTO_UDP) ||
+ 	    (encap == L2TP_ENCAPTYPE_IP && sk->sk_protocol != IPPROTO_L2TP))
+ 		return -EPROTONOSUPPORT;
+diff --git a/net/l2tp/l2tp_ip.c b/net/l2tp/l2tp_ip.c
+index 0d7c887a2b75..955662a6dee7 100644
+--- a/net/l2tp/l2tp_ip.c
++++ b/net/l2tp/l2tp_ip.c
+@@ -20,7 +20,6 @@
+ #include <net/icmp.h>
+ #include <net/udp.h>
+ #include <net/inet_common.h>
+-#include <net/inet_hashtables.h>
+ #include <net/tcp_states.h>
+ #include <net/protocol.h>
+ #include <net/xfrm.h>
+@@ -209,15 +208,31 @@ discard:
+ 	return 0;
+ }
+ 
+-static int l2tp_ip_open(struct sock *sk)
++static int l2tp_ip_hash(struct sock *sk)
+ {
+-	/* Prevent autobind. We don't have ports. */
+-	inet_sk(sk)->inet_num = IPPROTO_L2TP;
++	if (sk_unhashed(sk)) {
++		write_lock_bh(&l2tp_ip_lock);
++		sk_add_node(sk, &l2tp_ip_table);
++		write_unlock_bh(&l2tp_ip_lock);
++	}
++	return 0;
++}
+ 
++static void l2tp_ip_unhash(struct sock *sk)
++{
++	if (sk_unhashed(sk))
++		return;
+ 	write_lock_bh(&l2tp_ip_lock);
+-	sk_add_node(sk, &l2tp_ip_table);
++	sk_del_node_init(sk);
+ 	write_unlock_bh(&l2tp_ip_lock);
++}
++
++static int l2tp_ip_open(struct sock *sk)
++{
++	/* Prevent autobind. We don't have ports. */
++	inet_sk(sk)->inet_num = IPPROTO_L2TP;
+ 
++	l2tp_ip_hash(sk);
+ 	return 0;
+ }
+ 
+@@ -594,8 +609,8 @@ static struct proto l2tp_ip_prot = {
+ 	.sendmsg	   = l2tp_ip_sendmsg,
+ 	.recvmsg	   = l2tp_ip_recvmsg,
+ 	.backlog_rcv	   = l2tp_ip_backlog_recv,
+-	.hash		   = inet_hash,
+-	.unhash		   = inet_unhash,
++	.hash		   = l2tp_ip_hash,
++	.unhash		   = l2tp_ip_unhash,
+ 	.obj_size	   = sizeof(struct l2tp_ip_sock),
+ #ifdef CONFIG_COMPAT
+ 	.compat_setsockopt = compat_ip_setsockopt,
+diff --git a/net/l2tp/l2tp_ip6.c b/net/l2tp/l2tp_ip6.c
+index d148766f40d1..0fa694bd3f6a 100644
+--- a/net/l2tp/l2tp_ip6.c
++++ b/net/l2tp/l2tp_ip6.c
+@@ -20,8 +20,6 @@
+ #include <net/icmp.h>
+ #include <net/udp.h>
+ #include <net/inet_common.h>
+-#include <net/inet_hashtables.h>
+-#include <net/inet6_hashtables.h>
+ #include <net/tcp_states.h>
+ #include <net/protocol.h>
+ #include <net/xfrm.h>
+@@ -222,15 +220,31 @@ discard:
+ 	return 0;
+ }
+ 
+-static int l2tp_ip6_open(struct sock *sk)
++static int l2tp_ip6_hash(struct sock *sk)
+ {
+-	/* Prevent autobind. We don't have ports. */
+-	inet_sk(sk)->inet_num = IPPROTO_L2TP;
++	if (sk_unhashed(sk)) {
++		write_lock_bh(&l2tp_ip6_lock);
++		sk_add_node(sk, &l2tp_ip6_table);
++		write_unlock_bh(&l2tp_ip6_lock);
++	}
++	return 0;
++}
+ 
++static void l2tp_ip6_unhash(struct sock *sk)
++{
++	if (sk_unhashed(sk))
++		return;
+ 	write_lock_bh(&l2tp_ip6_lock);
+-	sk_add_node(sk, &l2tp_ip6_table);
++	sk_del_node_init(sk);
+ 	write_unlock_bh(&l2tp_ip6_lock);
++}
++
++static int l2tp_ip6_open(struct sock *sk)
++{
++	/* Prevent autobind. We don't have ports. */
++	inet_sk(sk)->inet_num = IPPROTO_L2TP;
+ 
++	l2tp_ip6_hash(sk);
+ 	return 0;
+ }
+ 
+@@ -728,8 +742,8 @@ static struct proto l2tp_ip6_prot = {
+ 	.sendmsg	   = l2tp_ip6_sendmsg,
+ 	.recvmsg	   = l2tp_ip6_recvmsg,
+ 	.backlog_rcv	   = l2tp_ip6_backlog_recv,
+-	.hash		   = inet6_hash,
+-	.unhash		   = inet_unhash,
++	.hash		   = l2tp_ip6_hash,
++	.unhash		   = l2tp_ip6_unhash,
+ 	.obj_size	   = sizeof(struct l2tp_ip6_sock),
+ #ifdef CONFIG_COMPAT
+ 	.compat_setsockopt = compat_ipv6_setsockopt,
+diff --git a/net/mptcp/protocol.c b/net/mptcp/protocol.c
+index 3c19a8efdcea..ddeb840acd29 100644
+--- a/net/mptcp/protocol.c
++++ b/net/mptcp/protocol.c
+@@ -920,6 +920,14 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 	int err;
+ 
+ 	lock_sock(sock->sk);
++	if (sock->state != SS_UNCONNECTED && msk->subflow) {
++		/* pending connection or invalid state, let existing subflow
++		 * cope with that
++		 */
++		ssock = msk->subflow;
++		goto do_connect;
++	}
++
+ 	ssock = __mptcp_socket_create(msk, TCP_SYN_SENT);
+ 	if (IS_ERR(ssock)) {
+ 		err = PTR_ERR(ssock);
+@@ -934,9 +942,17 @@ static int mptcp_stream_connect(struct socket *sock, struct sockaddr *uaddr,
+ 		mptcp_subflow_ctx(ssock->sk)->request_mptcp = 0;
+ #endif
+ 
++do_connect:
+ 	err = ssock->ops->connect(ssock, uaddr, addr_len, flags);
+-	inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk));
+-	mptcp_copy_inaddrs(sock->sk, ssock->sk);
++	sock->state = ssock->state;
++
++	/* on successful connect, the msk state will be moved to established by
++	 * subflow_finish_connect()
++	 */
++	if (!err || err == EINPROGRESS)
++		mptcp_copy_inaddrs(sock->sk, ssock->sk);
++	else
++		inet_sk_state_store(sock->sk, inet_sk_state_load(ssock->sk));
+ 
+ unlock:
+ 	release_sock(sock->sk);
+diff --git a/net/sched/sch_fq_pie.c b/net/sched/sch_fq_pie.c
+index 214657eb3dfd..6675ec591356 100644
+--- a/net/sched/sch_fq_pie.c
++++ b/net/sched/sch_fq_pie.c
+@@ -298,9 +298,9 @@ static int fq_pie_change(struct Qdisc *sch, struct nlattr *opt,
+ 			goto flow_error;
+ 		}
+ 		q->flows_cnt = nla_get_u32(tb[TCA_FQ_PIE_FLOWS]);
+-		if (!q->flows_cnt || q->flows_cnt > 65536) {
++		if (!q->flows_cnt || q->flows_cnt >= 65536) {
+ 			NL_SET_ERR_MSG_MOD(extack,
+-					   "Number of flows must be < 65536");
++					   "Number of flows must range in [1..65535]");
+ 			goto flow_error;
+ 		}
+ 	}
+diff --git a/net/sctp/ulpevent.c b/net/sctp/ulpevent.c
+index c82dbdcf13f2..77d5c36a8991 100644
+--- a/net/sctp/ulpevent.c
++++ b/net/sctp/ulpevent.c
+@@ -343,6 +343,9 @@ void sctp_ulpevent_nofity_peer_addr_change(struct sctp_transport *transport,
+ 	struct sockaddr_storage addr;
+ 	struct sctp_ulpevent *event;
+ 
++	if (asoc->state < SCTP_STATE_ESTABLISHED)
++		return;
++
+ 	memset(&addr, 0, sizeof(struct sockaddr_storage));
+ 	memcpy(&addr, &transport->ipaddr, transport->af_specific->sockaddr_len);
+ 
+diff --git a/net/vmw_vsock/af_vsock.c b/net/vmw_vsock/af_vsock.c
+index a5f28708e0e7..626bf9044418 100644
+--- a/net/vmw_vsock/af_vsock.c
++++ b/net/vmw_vsock/af_vsock.c
+@@ -1408,7 +1408,7 @@ static int vsock_accept(struct socket *sock, struct socket *newsock, int flags,
+ 	/* Wait for children sockets to appear; these are the new sockets
+ 	 * created upon connection establishment.
+ 	 */
+-	timeout = sock_sndtimeo(listener, flags & O_NONBLOCK);
++	timeout = sock_rcvtimeo(listener, flags & O_NONBLOCK);
+ 	prepare_to_wait(sk_sleep(listener), &wait, TASK_INTERRUPTIBLE);
+ 
+ 	while ((connected = vsock_dequeue_accept(listener)) == NULL &&
+diff --git a/net/vmw_vsock/virtio_transport_common.c b/net/vmw_vsock/virtio_transport_common.c
+index f3c4bab2f737..cfab9403a9c4 100644
+--- a/net/vmw_vsock/virtio_transport_common.c
++++ b/net/vmw_vsock/virtio_transport_common.c
+@@ -1128,6 +1128,14 @@ void virtio_transport_recv_pkt(struct virtio_transport *t,
+ 
+ 	lock_sock(sk);
+ 
++	/* Check if sk has been released before lock_sock */
++	if (sk->sk_shutdown == SHUTDOWN_MASK) {
++		(void)virtio_transport_reset_no_sock(t, pkt);
++		release_sock(sk);
++		sock_put(sk);
++		goto free_pkt;
++	}
++
+ 	/* Update CID in case it has changed after a transport reset event */
+ 	vsk->local_addr.svm_cid = dst.svm_cid;
+ 
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+new file mode 100644
+index 000000000000..1cda2e11b3ad
+--- /dev/null
++++ b/tools/testing/selftests/tc-testing/tc-tests/qdiscs/fq_pie.json
+@@ -0,0 +1,21 @@
++[
++    {
++        "id": "83be",
++        "name": "Create FQ-PIE with invalid number of flows",
++        "category": [
++            "qdisc",
++            "fq_pie"
++        ],
++        "setup": [
++            "$IP link add dev $DUMMY type dummy || /bin/true"
++        ],
++        "cmdUnderTest": "$TC qdisc add dev $DUMMY root fq_pie flows 65536",
++        "expExitCode": "2",
++        "verifyCmd": "$TC qdisc show dev $DUMMY",
++        "matchPattern": "qdisc",
++        "matchCount": "0",
++        "teardown": [
++            "$IP link del dev $DUMMY"
++        ]
++    }
++]

* [gentoo-commits] proj/linux-patches:5.6 commit in: /
@ 2020-06-17 16:41 Mike Pagano
  0 siblings, 0 replies; 30+ messages in thread
From: Mike Pagano @ 2020-06-17 16:41 UTC (permalink / raw
  To: gentoo-commits

commit:     0ac4bed9e42b4d0585db323da6203141d38adbc5
Author:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
AuthorDate: Wed Jun 17 16:41:08 2020 +0000
Commit:     Mike Pagano <mpagano <AT> gentoo <DOT> org>
CommitDate: Wed Jun 17 16:41:08 2020 +0000
URL:        https://gitweb.gentoo.org/proj/linux-patches.git/commit/?id=0ac4bed9

Linux patch 5.6.19

Signed-off-by: Mike Pagano <mpagano <AT> gentoo.org>

 0000_README             |    4 +
 1018_linux-5.6.19.patch | 5871 +++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 5875 insertions(+)

diff --git a/0000_README b/0000_README
index fd785d4..f3eae12 100644
--- a/0000_README
+++ b/0000_README
@@ -115,6 +115,10 @@ Patch:  1017_linux-5.6.18.patch
 From:   http://www.kernel.org
 Desc:   Linux 5.6.18
 
+Patch:  1018_linux-5.6.19.patch
+From:   http://www.kernel.org
+Desc:   Linux 5.6.19
+
 Patch:  1500_XATTR_USER_PREFIX.patch
 From:   https://bugs.gentoo.org/show_bug.cgi?id=470644
 Desc:   Support for namespace user.pax.* on tmpfs.

diff --git a/1018_linux-5.6.19.patch b/1018_linux-5.6.19.patch
new file mode 100644
index 0000000..db84ad4
--- /dev/null
+++ b/1018_linux-5.6.19.patch
@@ -0,0 +1,5871 @@
+diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
+index ca983328976b..f65b51523014 100644
+--- a/Documentation/lzo.txt
++++ b/Documentation/lzo.txt
+@@ -159,11 +159,15 @@ Byte sequences
+            distance = 16384 + (H << 14) + D
+            state = S (copy S literals after this block)
+            End of stream is reached if distance == 16384
++           In version 1 only, to prevent ambiguity with the RLE case when
++           ((distance & 0x803f) == 0x803f) && (261 <= length <= 264), the
++           compressor must not emit block copies where distance and length
++           meet these conditions.
+ 
+         In version 1 only, this instruction is also used to encode a run of
+-        zeros if distance = 0xbfff, i.e. H = 1 and the D bits are all 1.
++           zeros if distance = 0xbfff, i.e. H = 1 and the D bits are all 1.
+            In this case, it is followed by a fourth byte, X.
+-           run length = ((X << 3) | (0 0 0 0 0 L L L)) + 4.
++           run length = ((X << 3) | (0 0 0 0 0 L L L)) + 4
+ 
+       0 0 1 L L L L L  (32..63)
+            Copy of small block within 16kB distance (preferably less than 34B)
+diff --git a/Makefile b/Makefile
+index 2948731a235c..f927a4fc7fae 100644
+--- a/Makefile
++++ b/Makefile
+@@ -1,7 +1,7 @@
+ # SPDX-License-Identifier: GPL-2.0
+ VERSION = 5
+ PATCHLEVEL = 6
+-SUBLEVEL = 18
++SUBLEVEL = 19
+ EXTRAVERSION =
+ NAME = Kleptomaniac Octopus
+ 
+diff --git a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+index ba7f3e646c26..1333a68b9373 100644
+--- a/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
++++ b/arch/arm/boot/dts/at91-sama5d2_ptc_ek.dts
+@@ -125,8 +125,6 @@
+ 			bus-width = <8>;
+ 			pinctrl-names = "default";
+ 			pinctrl-0 = <&pinctrl_sdmmc0_default>;
+-			non-removable;
+-			mmc-ddr-1_8v;
+ 			status = "okay";
+ 		};
+ 
+diff --git a/arch/arm/include/asm/kvm_emulate.h b/arch/arm/include/asm/kvm_emulate.h
+index 3944305e81df..b26c1aaf1e3c 100644
+--- a/arch/arm/include/asm/kvm_emulate.h
++++ b/arch/arm/include/asm/kvm_emulate.h
+@@ -367,6 +367,7 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
+ 	}
+ }
+ 
+-static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu) {}
++static inline bool vcpu_has_ptrauth(struct kvm_vcpu *vcpu) { return false; }
++static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu) { }
+ 
+ #endif /* __ARM_KVM_EMULATE_H__ */
+diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
+index a827b4d60d38..03932e172730 100644
+--- a/arch/arm/include/asm/kvm_host.h
++++ b/arch/arm/include/asm/kvm_host.h
+@@ -453,4 +453,6 @@ static inline bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu)
+ 	return true;
+ }
+ 
++#define kvm_arm_vcpu_loaded(vcpu)	(false)
++
+ #endif /* __ARM_KVM_HOST_H__ */
+diff --git a/arch/arm/kernel/ptrace.c b/arch/arm/kernel/ptrace.c
+index b606cded90cd..4cc6a7eff635 100644
+--- a/arch/arm/kernel/ptrace.c
++++ b/arch/arm/kernel/ptrace.c
+@@ -219,8 +219,8 @@ static struct undef_hook arm_break_hook = {
+ };
+ 
+ static struct undef_hook thumb_break_hook = {
+-	.instr_mask	= 0xffff,
+-	.instr_val	= 0xde01,
++	.instr_mask	= 0xffffffff,
++	.instr_val	= 0x0000de01,
+ 	.cpsr_mask	= PSR_T_BIT,
+ 	.cpsr_val	= PSR_T_BIT,
+ 	.fn		= break_trap,
+diff --git a/arch/arm64/include/asm/acpi.h b/arch/arm64/include/asm/acpi.h
+index b263e239cb59..a45366c3909b 100644
+--- a/arch/arm64/include/asm/acpi.h
++++ b/arch/arm64/include/asm/acpi.h
+@@ -12,6 +12,7 @@
+ #include <linux/efi.h>
+ #include <linux/memblock.h>
+ #include <linux/psci.h>
++#include <linux/stddef.h>
+ 
+ #include <asm/cputype.h>
+ #include <asm/io.h>
+@@ -31,14 +32,14 @@
+  * is therefore used to delimit the MADT GICC structure minimum length
+  * appropriately.
+  */
+-#define ACPI_MADT_GICC_MIN_LENGTH   ACPI_OFFSET(  \
++#define ACPI_MADT_GICC_MIN_LENGTH   offsetof(  \
+ 	struct acpi_madt_generic_interrupt, efficiency_class)
+ 
+ #define BAD_MADT_GICC_ENTRY(entry, end)					\
+ 	(!(entry) || (entry)->header.length < ACPI_MADT_GICC_MIN_LENGTH || \
+ 	(unsigned long)(entry) + (entry)->header.length > (end))
+ 
+-#define ACPI_MADT_GICC_SPE  (ACPI_OFFSET(struct acpi_madt_generic_interrupt, \
++#define ACPI_MADT_GICC_SPE  (offsetof(struct acpi_madt_generic_interrupt, \
+ 	spe_interrupt) + sizeof(u16))
+ 
+ /* Basic configuration for ACPI */
+diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
+index f658dda12364..0ab02e5ff712 100644
+--- a/arch/arm64/include/asm/kvm_emulate.h
++++ b/arch/arm64/include/asm/kvm_emulate.h
+@@ -111,12 +111,6 @@ static inline void vcpu_ptrauth_disable(struct kvm_vcpu *vcpu)
+ 	vcpu->arch.hcr_el2 &= ~(HCR_API | HCR_APK);
+ }
+ 
+-static inline void vcpu_ptrauth_setup_lazy(struct kvm_vcpu *vcpu)
+-{
+-	if (vcpu_has_ptrauth(vcpu))
+-		vcpu_ptrauth_disable(vcpu);
+-}
+-
+ static inline unsigned long vcpu_get_vsesr(struct kvm_vcpu *vcpu)
+ {
+ 	return vcpu->arch.vsesr_el2;
+diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
+index 57fd46acd058..584d9792cbfe 100644
+--- a/arch/arm64/include/asm/kvm_host.h
++++ b/arch/arm64/include/asm/kvm_host.h
+@@ -404,8 +404,10 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
+  * CP14 and CP15 live in the same array, as they are backed by the
+  * same system registers.
+  */
+-#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
+-#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
++#define CPx_BIAS		IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)
++
++#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_BIAS])
++#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_BIAS])
+ 
+ struct kvm_vm_stat {
+ 	ulong remote_tlb_flush;
+@@ -683,4 +685,6 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
+ #define kvm_arm_vcpu_sve_finalized(vcpu) \
+ 	((vcpu)->arch.flags & KVM_ARM64_VCPU_SVE_FINALIZED)
+ 
++#define kvm_arm_vcpu_loaded(vcpu)	((vcpu)->arch.sysregs_loaded_on_cpu)
++
+ #endif /* __ARM64_KVM_HOST_H__ */
+diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
+index aacfc55de44c..e0a4bcdb9451 100644
+--- a/arch/arm64/kvm/handle_exit.c
++++ b/arch/arm64/kvm/handle_exit.c
+@@ -162,31 +162,16 @@ static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
+ 	return 1;
+ }
+ 
+-#define __ptrauth_save_key(regs, key)						\
+-({										\
+-	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
+-	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
+-})
+-
+ /*
+  * Handle the guest trying to use a ptrauth instruction, or trying to access a
+  * ptrauth register.
+  */
+ void kvm_arm_vcpu_ptrauth_trap(struct kvm_vcpu *vcpu)
+ {
+-	struct kvm_cpu_context *ctxt;
+-
+-	if (vcpu_has_ptrauth(vcpu)) {
++	if (vcpu_has_ptrauth(vcpu))
+ 		vcpu_ptrauth_enable(vcpu);
+-		ctxt = vcpu->arch.host_cpu_context;
+-		__ptrauth_save_key(ctxt->sys_regs, APIA);
+-		__ptrauth_save_key(ctxt->sys_regs, APIB);
+-		__ptrauth_save_key(ctxt->sys_regs, APDA);
+-		__ptrauth_save_key(ctxt->sys_regs, APDB);
+-		__ptrauth_save_key(ctxt->sys_regs, APGA);
+-	} else {
++	else
+ 		kvm_inject_undefined(vcpu);
+-	}
+ }
+ 
+ /*
+diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
+index 3e909b117f0c..c3d15eaa9ae6 100644
+--- a/arch/arm64/kvm/sys_regs.c
++++ b/arch/arm64/kvm/sys_regs.c
+@@ -1280,10 +1280,16 @@ static bool access_clidr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ static bool access_csselr(struct kvm_vcpu *vcpu, struct sys_reg_params *p,
+ 			  const struct sys_reg_desc *r)
+ {
++	int reg = r->reg;
++
++	/* See the 32bit mapping in kvm_host.h */
++	if (p->is_aarch32)
++		reg = r->reg / 2;
++
+ 	if (p->is_write)
+-		vcpu_write_sys_reg(vcpu, p->regval, r->reg);
++		vcpu_write_sys_reg(vcpu, p->regval, reg);
+ 	else
+-		p->regval = vcpu_read_sys_reg(vcpu, r->reg);
++		p->regval = vcpu_read_sys_reg(vcpu, reg);
+ 	return true;
+ }
+ 
+diff --git a/arch/csky/abiv2/inc/abi/entry.h b/arch/csky/abiv2/inc/abi/entry.h
+index 9023828ede97..ac8f65a3e75a 100644
+--- a/arch/csky/abiv2/inc/abi/entry.h
++++ b/arch/csky/abiv2/inc/abi/entry.h
+@@ -13,6 +13,8 @@
+ #define LSAVE_A1	28
+ #define LSAVE_A2	32
+ #define LSAVE_A3	36
++#define LSAVE_A4	40
++#define LSAVE_A5	44
+ 
+ #define KSPTOUSP
+ #define USPTOKSP
+diff --git a/arch/csky/kernel/entry.S b/arch/csky/kernel/entry.S
+index 9718388448a4..ff908d28f0a0 100644
+--- a/arch/csky/kernel/entry.S
++++ b/arch/csky/kernel/entry.S
+@@ -170,8 +170,10 @@ csky_syscall_trace:
+ 	ldw	a3, (sp, LSAVE_A3)
+ #if defined(__CSKYABIV2__)
+ 	subi	sp, 8
+-	stw	r5, (sp, 0x4)
+-	stw	r4, (sp, 0x0)
++	ldw	r9, (sp, LSAVE_A4)
++	stw	r9, (sp, 0x0)
++	ldw	r9, (sp, LSAVE_A5)
++	stw	r9, (sp, 0x4)
+ #else
+ 	ldw	r6, (sp, LSAVE_A4)
+ 	ldw	r7, (sp, LSAVE_A5)
+diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
+index 41204a49cf95..7b47a323dc23 100644
+--- a/arch/mips/include/asm/kvm_host.h
++++ b/arch/mips/include/asm/kvm_host.h
+@@ -274,8 +274,12 @@ enum emulation_result {
+ #define MIPS3_PG_SHIFT		6
+ #define MIPS3_PG_FRAME		0x3fffffc0
+ 
++#if defined(CONFIG_64BIT)
++#define VPN2_MASK		GENMASK(cpu_vmbits - 1, 13)
++#else
+ #define VPN2_MASK		0xffffe000
+-#define KVM_ENTRYHI_ASID	MIPS_ENTRYHI_ASID
++#endif
++#define KVM_ENTRYHI_ASID	cpu_asid_mask(&boot_cpu_data)
+ #define TLB_IS_GLOBAL(x)	((x).tlb_lo[0] & (x).tlb_lo[1] & ENTRYLO_G)
+ #define TLB_VPN2(x)		((x).tlb_hi & VPN2_MASK)
+ #define TLB_ASID(x)		((x).tlb_hi & KVM_ENTRYHI_ASID)
+diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
+index a32d478a7f41..b4c89a1acebb 100644
+--- a/arch/powerpc/kernel/vmlinux.lds.S
++++ b/arch/powerpc/kernel/vmlinux.lds.S
+@@ -303,12 +303,6 @@ SECTIONS
+ 		*(.branch_lt)
+ 	}
+ 
+-#ifdef CONFIG_DEBUG_INFO_BTF
+-	.BTF : AT(ADDR(.BTF) - LOAD_OFFSET) {
+-		*(.BTF)
+-	}
+-#endif
+-
+ 	.opd : AT(ADDR(.opd) - LOAD_OFFSET) {
+ 		__start_opd = .;
+ 		KEEP(*(.opd))
+diff --git a/arch/powerpc/mm/ptdump/ptdump.c b/arch/powerpc/mm/ptdump/ptdump.c
+index 206156255247..7f3faf0dea25 100644
+--- a/arch/powerpc/mm/ptdump/ptdump.c
++++ b/arch/powerpc/mm/ptdump/ptdump.c
+@@ -60,6 +60,7 @@ struct pg_state {
+ 	unsigned long start_address;
+ 	unsigned long start_pa;
+ 	unsigned long last_pa;
++	unsigned long page_size;
+ 	unsigned int level;
+ 	u64 current_flags;
+ 	bool check_wx;
+@@ -157,9 +158,9 @@ static void dump_addr(struct pg_state *st, unsigned long addr)
+ #endif
+ 
+ 	pt_dump_seq_printf(st->seq, REG "-" REG " ", st->start_address, addr - 1);
+-	if (st->start_pa == st->last_pa && st->start_address + PAGE_SIZE != addr) {
++	if (st->start_pa == st->last_pa && st->start_address + st->page_size != addr) {
+ 		pt_dump_seq_printf(st->seq, "[" REG "]", st->start_pa);
+-		delta = PAGE_SIZE >> 10;
++		delta = st->page_size >> 10;
+ 	} else {
+ 		pt_dump_seq_printf(st->seq, " " REG " ", st->start_pa);
+ 		delta = (addr - st->start_address) >> 10;
+@@ -190,7 +191,7 @@ static void note_prot_wx(struct pg_state *st, unsigned long addr)
+ }
+ 
+ static void note_page(struct pg_state *st, unsigned long addr,
+-	       unsigned int level, u64 val)
++	       unsigned int level, u64 val, unsigned long page_size)
+ {
+ 	u64 flag = val & pg_level[level].mask;
+ 	u64 pa = val & PTE_RPN_MASK;
+@@ -202,6 +203,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
+ 		st->start_address = addr;
+ 		st->start_pa = pa;
+ 		st->last_pa = pa;
++		st->page_size = page_size;
+ 		pt_dump_seq_printf(st->seq, "---[ %s ]---\n", st->marker->name);
+ 	/*
+ 	 * Dump the section of virtual memory when:
+@@ -213,7 +215,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
+ 	 */
+ 	} else if (flag != st->current_flags || level != st->level ||
+ 		   addr >= st->marker[1].start_address ||
+-		   (pa != st->last_pa + PAGE_SIZE &&
++		   (pa != st->last_pa + st->page_size &&
+ 		    (pa != st->start_pa || st->start_pa != st->last_pa))) {
+ 
+ 		/* Check the PTE flags */
+@@ -241,6 +243,7 @@ static void note_page(struct pg_state *st, unsigned long addr,
+ 		st->start_address = addr;
+ 		st->start_pa = pa;
+ 		st->last_pa = pa;
++		st->page_size = page_size;
+ 		st->current_flags = flag;
+ 		st->level = level;
+ 	} else {
+@@ -256,7 +259,7 @@ static void walk_pte(struct pg_state *st, pmd_t *pmd, unsigned long start)
+ 
+ 	for (i = 0; i < PTRS_PER_PTE; i++, pte++) {
+ 		addr = start + i * PAGE_SIZE;
+-		note_page(st, addr, 4, pte_val(*pte));
++		note_page(st, addr, 4, pte_val(*pte), PAGE_SIZE);
+ 
+ 	}
+ }
+@@ -273,7 +276,7 @@ static void walk_pmd(struct pg_state *st, pud_t *pud, unsigned long start)
+ 			/* pmd exists */
+ 			walk_pte(st, pmd, addr);
+ 		else
+-			note_page(st, addr, 3, pmd_val(*pmd));
++			note_page(st, addr, 3, pmd_val(*pmd), PMD_SIZE);
+ 	}
+ }
+ 
+@@ -289,7 +292,7 @@ static void walk_pud(struct pg_state *st, pgd_t *pgd, unsigned long start)
+ 			/* pud exists */
+ 			walk_pmd(st, pud, addr);
+ 		else
+-			note_page(st, addr, 2, pud_val(*pud));
++			note_page(st, addr, 2, pud_val(*pud), PUD_SIZE);
+ 	}
+ }
+ 
+@@ -308,7 +311,7 @@ static void walk_pagetables(struct pg_state *st)
+ 			/* pgd exists */
+ 			walk_pud(st, pgd, addr);
+ 		else
+-			note_page(st, addr, 1, pgd_val(*pgd));
++			note_page(st, addr, 1, pgd_val(*pgd), PGDIR_SIZE);
+ 	}
+ }
+ 
+@@ -363,7 +366,7 @@ static int ptdump_show(struct seq_file *m, void *v)
+ 
+ 	/* Traverse kernel page tables */
+ 	walk_pagetables(&st);
+-	note_page(&st, 0, 0, 0);
++	note_page(&st, 0, 0, 0, 0);
+ 	return 0;
+ }
+ 
+diff --git a/arch/powerpc/sysdev/xive/common.c b/arch/powerpc/sysdev/xive/common.c
+index fe8d396e2301..16df9cc8f360 100644
+--- a/arch/powerpc/sysdev/xive/common.c
++++ b/arch/powerpc/sysdev/xive/common.c
+@@ -19,6 +19,7 @@
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/msi.h>
++#include <linux/vmalloc.h>
+ 
+ #include <asm/prom.h>
+ #include <asm/io.h>
+@@ -1013,12 +1014,16 @@ EXPORT_SYMBOL_GPL(is_xive_irq);
+ void xive_cleanup_irq_data(struct xive_irq_data *xd)
+ {
+ 	if (xd->eoi_mmio) {
++		unmap_kernel_range((unsigned long)xd->eoi_mmio,
++				   1u << xd->esb_shift);
+ 		iounmap(xd->eoi_mmio);
+ 		if (xd->eoi_mmio == xd->trig_mmio)
+ 			xd->trig_mmio = NULL;
+ 		xd->eoi_mmio = NULL;
+ 	}
+ 	if (xd->trig_mmio) {
++		unmap_kernel_range((unsigned long)xd->trig_mmio,
++				   1u << xd->esb_shift);
+ 		iounmap(xd->trig_mmio);
+ 		xd->trig_mmio = NULL;
+ 	}
+diff --git a/arch/s390/pci/pci_clp.c b/arch/s390/pci/pci_clp.c
+index 0d3d8f170ea4..25208fa95426 100644
+--- a/arch/s390/pci/pci_clp.c
++++ b/arch/s390/pci/pci_clp.c
+@@ -309,14 +309,13 @@ out:
+ 
+ int clp_disable_fh(struct zpci_dev *zdev)
+ {
+-	u32 fh = zdev->fh;
+ 	int rc;
+ 
+ 	if (!zdev_enabled(zdev))
+ 		return 0;
+ 
+ 	rc = clp_set_pci_fn(zdev, 0, CLP_SET_DISABLE_PCI_FN);
+-	zpci_dbg(3, "dis fid:%x, fh:%x, rc:%d\n", zdev->fid, fh, rc);
++	zpci_dbg(3, "dis fid:%x, fh:%x, rc:%d\n", zdev->fid, zdev->fh, rc);
+ 	return rc;
+ }
+ 
+diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
+index dff6623804c2..dae71ebfa709 100644
+--- a/arch/x86/events/intel/core.c
++++ b/arch/x86/events/intel/core.c
+@@ -1892,8 +1892,8 @@ static __initconst const u64 tnt_hw_cache_extra_regs
+ 
+ static struct extra_reg intel_tnt_extra_regs[] __read_mostly = {
+ 	/* must define OFFCORE_RSP_X first, see intel_fixup_er() */
+-	INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0xffffff9fffull, RSP_0),
+-	INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0xffffff9fffull, RSP_1),
++	INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x800ff0ffffff9fffull, RSP_0),
++	INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0xff0ffffff9fffull, RSP_1),
+ 	EVENT_EXTRA_END
+ };
+ 
+diff --git a/arch/x86/include/asm/set_memory.h b/arch/x86/include/asm/set_memory.h
+index 64c3dce374e5..688d6f1e7a63 100644
+--- a/arch/x86/include/asm/set_memory.h
++++ b/arch/x86/include/asm/set_memory.h
+@@ -83,28 +83,35 @@ int set_direct_map_default_noflush(struct page *page);
+ extern int kernel_set_to_readonly;
+ 
+ #ifdef CONFIG_X86_64
+-static inline int set_mce_nospec(unsigned long pfn)
++/*
++ * Prevent speculative access to the page by either unmapping
++ * it (if we do not require access to any part of the page) or
++ * marking it uncacheable (if we want to try to retrieve data
++ * from non-poisoned lines in the page).
++ */
++static inline int set_mce_nospec(unsigned long pfn, bool unmap)
+ {
+ 	unsigned long decoy_addr;
+ 	int rc;
+ 
+ 	/*
+-	 * Mark the linear address as UC to make sure we don't log more
+-	 * errors because of speculative access to the page.
+ 	 * We would like to just call:
+-	 *      set_memory_uc((unsigned long)pfn_to_kaddr(pfn), 1);
++	 *      set_memory_XX((unsigned long)pfn_to_kaddr(pfn), 1);
+ 	 * but doing that would radically increase the odds of a
+ 	 * speculative access to the poison page because we'd have
+ 	 * the virtual address of the kernel 1:1 mapping sitting
+ 	 * around in registers.
+ 	 * Instead we get tricky.  We create a non-canonical address
+ 	 * that looks just like the one we want, but has bit 63 flipped.
+-	 * This relies on set_memory_uc() properly sanitizing any __pa()
++	 * This relies on set_memory_XX() properly sanitizing any __pa()
+ 	 * results with __PHYSICAL_MASK or PTE_PFN_MASK.
+ 	 */
+ 	decoy_addr = (pfn << PAGE_SHIFT) + (PAGE_OFFSET ^ BIT(63));
+ 
+-	rc = set_memory_uc(decoy_addr, 1);
++	if (unmap)
++		rc = set_memory_np(decoy_addr, 1);
++	else
++		rc = set_memory_uc(decoy_addr, 1);
+ 	if (rc)
+ 		pr_warn("Could not invalidate pfn=0x%lx from 1:1 map\n", pfn);
+ 	return rc;
+diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
+index 1f875fbe1384..f04cc01e629e 100644
+--- a/arch/x86/kernel/cpu/amd.c
++++ b/arch/x86/kernel/cpu/amd.c
+@@ -1111,8 +1111,7 @@ static const int amd_erratum_383[] =
+ 
+ /* #1054: Instructions Retired Performance Counter May Be Inaccurate */
+ static const int amd_erratum_1054[] =
+-	AMD_OSVW_ERRATUM(0, AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+-
++	AMD_LEGACY_ERRATUM(AMD_MODEL_RANGE(0x17, 0, 0, 0x2f, 0xf));
+ 
+ static bool cpu_has_amd_erratum(struct cpuinfo_x86 *cpu, const int *erratum)
+ {
+diff --git a/arch/x86/kernel/cpu/bugs.c b/arch/x86/kernel/cpu/bugs.c
+index 56978cb06149..b53dcff21438 100644
+--- a/arch/x86/kernel/cpu/bugs.c
++++ b/arch/x86/kernel/cpu/bugs.c
+@@ -588,7 +588,9 @@ early_param("nospectre_v1", nospectre_v1_cmdline);
+ static enum spectre_v2_mitigation spectre_v2_enabled __ro_after_init =
+ 	SPECTRE_V2_NONE;
+ 
+-static enum spectre_v2_user_mitigation spectre_v2_user __ro_after_init =
++static enum spectre_v2_user_mitigation spectre_v2_user_stibp __ro_after_init =
++	SPECTRE_V2_USER_NONE;
++static enum spectre_v2_user_mitigation spectre_v2_user_ibpb __ro_after_init =
+ 	SPECTRE_V2_USER_NONE;
+ 
+ #ifdef CONFIG_RETPOLINE
+@@ -734,15 +736,6 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 		break;
+ 	}
+ 
+-	/*
+-	 * At this point, an STIBP mode other than "off" has been set.
+-	 * If STIBP support is not being forced, check if STIBP always-on
+-	 * is preferred.
+-	 */
+-	if (mode != SPECTRE_V2_USER_STRICT &&
+-	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
+-		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
+-
+ 	/* Initialize Indirect Branch Prediction Barrier */
+ 	if (boot_cpu_has(X86_FEATURE_IBPB)) {
+ 		setup_force_cpu_cap(X86_FEATURE_USE_IBPB);
+@@ -765,23 +758,36 @@ spectre_v2_user_select_mitigation(enum spectre_v2_mitigation_cmd v2_cmd)
+ 		pr_info("mitigation: Enabling %s Indirect Branch Prediction Barrier\n",
+ 			static_key_enabled(&switch_mm_always_ibpb) ?
+ 			"always-on" : "conditional");
++
++		spectre_v2_user_ibpb = mode;
+ 	}
+ 
+-	/* If enhanced IBRS is enabled no STIBP required */
+-	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
++	/*
++	 * If enhanced IBRS is enabled or SMT impossible, STIBP is not
++	 * required.
++	 */
++	if (!smt_possible || spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+ 		return;
+ 
+ 	/*
+-	 * If SMT is not possible or STIBP is not available clear the STIBP
+-	 * mode.
++	 * At this point, an STIBP mode other than "off" has been set.
++	 * If STIBP support is not being forced, check if STIBP always-on
++	 * is preferred.
+ 	 */
+-	if (!smt_possible || !boot_cpu_has(X86_FEATURE_STIBP))
++	if (mode != SPECTRE_V2_USER_STRICT &&
++	    boot_cpu_has(X86_FEATURE_AMD_STIBP_ALWAYS_ON))
++		mode = SPECTRE_V2_USER_STRICT_PREFERRED;
++
++	/*
++	 * If STIBP is not available, clear the STIBP mode.
++	 */
++	if (!boot_cpu_has(X86_FEATURE_STIBP))
+ 		mode = SPECTRE_V2_USER_NONE;
++
++	spectre_v2_user_stibp = mode;
++
+ set_mode:
+-	spectre_v2_user = mode;
+-	/* Only print the STIBP mode when SMT possible */
+-	if (smt_possible)
+-		pr_info("%s\n", spectre_v2_user_strings[mode]);
++	pr_info("%s\n", spectre_v2_user_strings[mode]);
+ }
+ 
+ static const char * const spectre_v2_strings[] = {
+@@ -1014,7 +1020,7 @@ void cpu_bugs_smt_update(void)
+ {
+ 	mutex_lock(&spec_ctrl_mutex);
+ 
+-	switch (spectre_v2_user) {
++	switch (spectre_v2_user_stibp) {
+ 	case SPECTRE_V2_USER_NONE:
+ 		break;
+ 	case SPECTRE_V2_USER_STRICT:
+@@ -1257,14 +1263,19 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+ {
+ 	switch (ctrl) {
+ 	case PR_SPEC_ENABLE:
+-		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
++		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
+ 			return 0;
+ 		/*
+ 		 * Indirect branch speculation is always disabled in strict
+-		 * mode.
++		 * mode. Nor can it be enabled if it was force-disabled
++		 * by a previous prctl call.
++
+ 		 */
+-		if (spectre_v2_user == SPECTRE_V2_USER_STRICT ||
+-		    spectre_v2_user == SPECTRE_V2_USER_STRICT_PREFERRED)
++		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
++		    task_spec_ib_force_disable(task))
+ 			return -EPERM;
+ 		task_clear_spec_ib_disable(task);
+ 		task_update_spec_tif(task);
+@@ -1275,10 +1286,12 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
+ 		 * Indirect branch speculation is always allowed when
+ 		 * mitigation is force disabled.
+ 		 */
+-		if (spectre_v2_user == SPECTRE_V2_USER_NONE)
++		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
+ 			return -EPERM;
+-		if (spectre_v2_user == SPECTRE_V2_USER_STRICT ||
+-		    spectre_v2_user == SPECTRE_V2_USER_STRICT_PREFERRED)
++		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
+ 			return 0;
+ 		task_set_spec_ib_disable(task);
+ 		if (ctrl == PR_SPEC_FORCE_DISABLE)
+@@ -1309,7 +1322,8 @@ void arch_seccomp_spec_mitigate(struct task_struct *task)
+ {
+ 	if (ssb_mode == SPEC_STORE_BYPASS_SECCOMP)
+ 		ssb_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+-	if (spectre_v2_user == SPECTRE_V2_USER_SECCOMP)
++	if (spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
++	    spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP)
+ 		ib_prctl_set(task, PR_SPEC_FORCE_DISABLE);
+ }
+ #endif
+@@ -1340,22 +1354,24 @@ static int ib_prctl_get(struct task_struct *task)
+ 	if (!boot_cpu_has_bug(X86_BUG_SPECTRE_V2))
+ 		return PR_SPEC_NOT_AFFECTED;
+ 
+-	switch (spectre_v2_user) {
+-	case SPECTRE_V2_USER_NONE:
++	if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
++	    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
+ 		return PR_SPEC_ENABLE;
+-	case SPECTRE_V2_USER_PRCTL:
+-	case SPECTRE_V2_USER_SECCOMP:
++	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
++	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
++	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
++		return PR_SPEC_DISABLE;
++	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
++	    spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
++	    spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
++	    spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) {
+ 		if (task_spec_ib_force_disable(task))
+ 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
+ 		if (task_spec_ib_disable(task))
+ 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
+ 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
+-	case SPECTRE_V2_USER_STRICT:
+-	case SPECTRE_V2_USER_STRICT_PREFERRED:
+-		return PR_SPEC_DISABLE;
+-	default:
++	} else
+ 		return PR_SPEC_NOT_AFFECTED;
+-	}
+ }
+ 
+ int arch_prctl_spec_ctrl_get(struct task_struct *task, unsigned long which)
+@@ -1594,7 +1610,7 @@ static char *stibp_state(void)
+ 	if (spectre_v2_enabled == SPECTRE_V2_IBRS_ENHANCED)
+ 		return "";
+ 
+-	switch (spectre_v2_user) {
++	switch (spectre_v2_user_stibp) {
+ 	case SPECTRE_V2_USER_NONE:
+ 		return ", STIBP: disabled";
+ 	case SPECTRE_V2_USER_STRICT:
+diff --git a/arch/x86/kernel/cpu/mce/core.c b/arch/x86/kernel/cpu/mce/core.c
+index 2c4f949611e4..410d3868bf33 100644
+--- a/arch/x86/kernel/cpu/mce/core.c
++++ b/arch/x86/kernel/cpu/mce/core.c
+@@ -527,6 +527,13 @@ bool mce_is_memory_error(struct mce *m)
+ }
+ EXPORT_SYMBOL_GPL(mce_is_memory_error);
+ 
++static bool whole_page(struct mce *m)
++{
++	if (!mca_cfg.ser || !(m->status & MCI_STATUS_MISCV))
++		return true;
++	return MCI_MISC_ADDR_LSB(m->misc) >= PAGE_SHIFT;
++}
++
+ bool mce_is_correctable(struct mce *m)
+ {
+ 	if (m->cpuvendor == X86_VENDOR_AMD && m->status & MCI_STATUS_DEFERRED)
+@@ -598,7 +605,7 @@ static int uc_decode_notifier(struct notifier_block *nb, unsigned long val,
+ 
+ 	pfn = mce->addr >> PAGE_SHIFT;
+ 	if (!memory_failure(pfn, 0))
+-		set_mce_nospec(pfn);
++		set_mce_nospec(pfn, whole_page(mce));
+ 
+ 	return NOTIFY_OK;
+ }
+@@ -1096,7 +1103,7 @@ static int do_memory_failure(struct mce *m)
+ 	if (ret)
+ 		pr_err("Memory error not recovered");
+ 	else
+-		set_mce_nospec(m->addr >> PAGE_SHIFT);
++		set_mce_nospec(m->addr >> PAGE_SHIFT, whole_page(m));
+ 	return ret;
+ }
+ 
+diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
+index 9898f672b81d..3d88300ec306 100644
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -546,28 +546,20 @@ static __always_inline void __speculation_ctrl_update(unsigned long tifp,
+ 
+ 	lockdep_assert_irqs_disabled();
+ 
+-	/*
+-	 * If TIF_SSBD is different, select the proper mitigation
+-	 * method. Note that if SSBD mitigation is disabled or permanentely
+-	 * enabled this branch can't be taken because nothing can set
+-	 * TIF_SSBD.
+-	 */
+-	if (tif_diff & _TIF_SSBD) {
+-		if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
++	/* Handle change of TIF_SSBD depending on the mitigation method. */
++	if (static_cpu_has(X86_FEATURE_VIRT_SSBD)) {
++		if (tif_diff & _TIF_SSBD)
+ 			amd_set_ssb_virt_state(tifn);
+-		} else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
++	} else if (static_cpu_has(X86_FEATURE_LS_CFG_SSBD)) {
++		if (tif_diff & _TIF_SSBD)
+ 			amd_set_core_ssb_state(tifn);
+-		} else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
+-			   static_cpu_has(X86_FEATURE_AMD_SSBD)) {
+-			msr |= ssbd_tif_to_spec_ctrl(tifn);
+-			updmsr  = true;
+-		}
++	} else if (static_cpu_has(X86_FEATURE_SPEC_CTRL_SSBD) ||
++		   static_cpu_has(X86_FEATURE_AMD_SSBD)) {
++		updmsr |= !!(tif_diff & _TIF_SSBD);
++		msr |= ssbd_tif_to_spec_ctrl(tifn);
+ 	}
+ 
+-	/*
+-	 * Only evaluate TIF_SPEC_IB if conditional STIBP is enabled,
+-	 * otherwise avoid the MSR write.
+-	 */
++	/* Only evaluate TIF_SPEC_IB if conditional STIBP is enabled. */
+ 	if (IS_ENABLED(CONFIG_SMP) &&
+ 	    static_branch_unlikely(&switch_to_cond_stibp)) {
+ 		updmsr |= !!(tif_diff & _TIF_SPEC_IB);
+diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
+index 0cc7c0b106bb..762f5c1465a6 100644
+--- a/arch/x86/kernel/reboot.c
++++ b/arch/x86/kernel/reboot.c
+@@ -197,6 +197,14 @@ static const struct dmi_system_id reboot_dmi_table[] __initconst = {
+ 			DMI_MATCH(DMI_PRODUCT_NAME, "MacBook5"),
+ 		},
+ 	},
++	{	/* Handle problems with rebooting on Apple MacBook6,1 */
++		.callback = set_pci_reboot,
++		.ident = "Apple MacBook6,1",
++		.matches = {
++			DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
++			DMI_MATCH(DMI_PRODUCT_NAME, "MacBook6,1"),
++		},
++	},
+ 	{	/* Handle problems with rebooting on Apple MacBookPro5 */
+ 		.callback = set_pci_reboot,
+ 		.ident = "Apple MacBookPro5",
+diff --git a/arch/x86/kernel/time.c b/arch/x86/kernel/time.c
+index d8673d8a779b..36a585b80d9e 100644
+--- a/arch/x86/kernel/time.c
++++ b/arch/x86/kernel/time.c
+@@ -25,10 +25,6 @@
+ #include <asm/hpet.h>
+ #include <asm/time.h>
+ 
+-#ifdef CONFIG_X86_64
+-__visible volatile unsigned long jiffies __cacheline_aligned_in_smp = INITIAL_JIFFIES;
+-#endif
+-
+ unsigned long profile_pc(struct pt_regs *regs)
+ {
+ 	unsigned long pc = instruction_pointer(regs);
+diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
+index e3296aa028fe..ccb2dec210ef 100644
+--- a/arch/x86/kernel/vmlinux.lds.S
++++ b/arch/x86/kernel/vmlinux.lds.S
+@@ -39,13 +39,13 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
+ #ifdef CONFIG_X86_32
+ OUTPUT_ARCH(i386)
+ ENTRY(phys_startup_32)
+-jiffies = jiffies_64;
+ #else
+ OUTPUT_ARCH(i386:x86-64)
+ ENTRY(phys_startup_64)
+-jiffies_64 = jiffies;
+ #endif
+ 
++jiffies = jiffies_64;
++
+ #if defined(CONFIG_X86_64)
+ /*
+  * On 64-bit, align RODATA to 2MB so we retain large page mappings for
+diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
+index 87e9ba27ada1..ea6fa05e2fd9 100644
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -343,6 +343,8 @@ void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value, u64 access_mask)
+ {
+ 	BUG_ON((u64)(unsigned)access_mask != access_mask);
+ 	BUG_ON((mmio_mask & mmio_value) != mmio_value);
++	WARN_ON(mmio_value & (shadow_nonpresent_or_rsvd_mask << shadow_nonpresent_or_rsvd_mask_len));
++	WARN_ON(mmio_value & shadow_nonpresent_or_rsvd_lower_gfn_mask);
+ 	shadow_mmio_value = mmio_value | SPTE_MMIO_MASK;
+ 	shadow_mmio_mask = mmio_mask | SPTE_SPECIAL_MASK;
+ 	shadow_mmio_access_mask = access_mask;
+@@ -591,16 +593,15 @@ static void kvm_mmu_reset_all_pte_masks(void)
+ 	 * the most significant bits of legal physical address space.
+ 	 */
+ 	shadow_nonpresent_or_rsvd_mask = 0;
+-	low_phys_bits = boot_cpu_data.x86_cache_bits;
+-	if (boot_cpu_data.x86_cache_bits <
+-	    52 - shadow_nonpresent_or_rsvd_mask_len) {
++	low_phys_bits = boot_cpu_data.x86_phys_bits;
++	if (boot_cpu_has_bug(X86_BUG_L1TF) &&
++	    !WARN_ON_ONCE(boot_cpu_data.x86_cache_bits >=
++			  52 - shadow_nonpresent_or_rsvd_mask_len)) {
++		low_phys_bits = boot_cpu_data.x86_cache_bits
++			- shadow_nonpresent_or_rsvd_mask_len;
+ 		shadow_nonpresent_or_rsvd_mask =
+-			rsvd_bits(boot_cpu_data.x86_cache_bits -
+-				  shadow_nonpresent_or_rsvd_mask_len,
+-				  boot_cpu_data.x86_cache_bits - 1);
+-		low_phys_bits -= shadow_nonpresent_or_rsvd_mask_len;
+-	} else
+-		WARN_ON_ONCE(boot_cpu_has_bug(X86_BUG_L1TF));
++			rsvd_bits(low_phys_bits, boot_cpu_data.x86_cache_bits - 1);
++	}
+ 
+ 	shadow_nonpresent_or_rsvd_lower_gfn_mask =
+ 		GENMASK_ULL(low_phys_bits - 1, PAGE_SHIFT);
+@@ -6131,25 +6132,16 @@ static void kvm_set_mmio_spte_mask(void)
+ 	u64 mask;
+ 
+ 	/*
+-	 * Set the reserved bits and the present bit of an paging-structure
+-	 * entry to generate page fault with PFER.RSV = 1.
+-	 */
+-
+-	/*
+-	 * Mask the uppermost physical address bit, which would be reserved as
+-	 * long as the supported physical address width is less than 52.
++	 * Set a reserved PA bit in MMIO SPTEs to generate page faults with
++	 * PFEC.RSVD=1 on MMIO accesses.  64-bit PTEs (PAE, x86-64, and EPT
++	 * paging) support a maximum of 52 bits of PA, i.e. if the CPU supports
++	 * 52-bit physical addresses then there are no reserved PA bits in the
++	 * PTEs and so the reserved PA approach must be disabled.
+ 	 */
+-	mask = 1ull << 51;
+-
+-	/* Set the present bit. */
+-	mask |= 1ull;
+-
+-	/*
+-	 * If reserved bit is not supported, clear the present bit to disable
+-	 * mmio page fault.
+-	 */
+-	if (shadow_phys_bits == 52)
+-		mask &= ~1ull;
++	if (shadow_phys_bits < 52)
++		mask = BIT_ULL(51) | PT_PRESENT_MASK;
++	else
++		mask = 0;
+ 
+ 	kvm_mmu_set_mmio_spte_mask(mask, mask, ACC_WRITE_MASK | ACC_USER_MASK);
+ }
+diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
+index c974c49221eb..eee7cb0e1d95 100644
+--- a/arch/x86/kvm/svm.c
++++ b/arch/x86/kvm/svm.c
+@@ -3236,8 +3236,8 @@ static int nested_svm_exit_special(struct vcpu_svm *svm)
+ 			return NESTED_EXIT_HOST;
+ 		break;
+ 	case SVM_EXIT_EXCP_BASE + PF_VECTOR:
+-		/* When we're shadowing, trap PFs, but not async PF */
+-		if (!npt_enabled && svm->vcpu.arch.apf.host_apf_reason == 0)
++		/* Trap async PF even if not shadowing */
++		if (!npt_enabled || svm->vcpu.arch.apf.host_apf_reason)
+ 			return NESTED_EXIT_HOST;
+ 		break;
+ 	default:
+@@ -3326,7 +3326,7 @@ static inline void copy_vmcb_control_area(struct vmcb *dst_vmcb, struct vmcb *fr
+ 	dst->iopm_base_pa         = from->iopm_base_pa;
+ 	dst->msrpm_base_pa        = from->msrpm_base_pa;
+ 	dst->tsc_offset           = from->tsc_offset;
+-	dst->asid                 = from->asid;
++	/* asid not copied, it is handled manually for svm->vmcb.  */
+ 	dst->tlb_ctl              = from->tlb_ctl;
+ 	dst->int_ctl              = from->int_ctl;
+ 	dst->int_vector           = from->int_vector;
+diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
+index 3a2f05ef51fa..a03db4a75977 100644
+--- a/arch/x86/kvm/vmx/nested.c
++++ b/arch/x86/kvm/vmx/nested.c
+@@ -303,7 +303,7 @@ static void vmx_switch_vmcs(struct kvm_vcpu *vcpu, struct loaded_vmcs *vmcs)
+ 	cpu = get_cpu();
+ 	prev = vmx->loaded_vmcs;
+ 	vmx->loaded_vmcs = vmcs;
+-	vmx_vcpu_load_vmcs(vcpu, cpu);
++	vmx_vcpu_load_vmcs(vcpu, cpu, prev);
+ 	vmx_sync_vmcs_host_state(vmx, prev);
+ 	put_cpu();
+ 
+@@ -5562,7 +5562,7 @@ bool nested_vmx_exit_reflected(struct kvm_vcpu *vcpu, u32 exit_reason)
+ 				vmcs_read32(VM_EXIT_INTR_ERROR_CODE),
+ 				KVM_ISA_VMX);
+ 
+-	switch (exit_reason) {
++	switch ((u16)exit_reason) {
+ 	case EXIT_REASON_EXCEPTION_NMI:
+ 		if (is_nmi(intr_info))
+ 			return false;
+diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
+index a83c94a971ee..b29902c521f2 100644
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -1314,10 +1314,12 @@ after_clear_sn:
+ 		pi_set_on(pi_desc);
+ }
+ 
+-void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
++void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
++			struct loaded_vmcs *buddy)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 	bool already_loaded = vmx->loaded_vmcs->cpu == cpu;
++	struct vmcs *prev;
+ 
+ 	if (!already_loaded) {
+ 		loaded_vmcs_clear(vmx->loaded_vmcs);
+@@ -1336,10 +1338,18 @@ void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu)
+ 		local_irq_enable();
+ 	}
+ 
+-	if (per_cpu(current_vmcs, cpu) != vmx->loaded_vmcs->vmcs) {
++	prev = per_cpu(current_vmcs, cpu);
++	if (prev != vmx->loaded_vmcs->vmcs) {
+ 		per_cpu(current_vmcs, cpu) = vmx->loaded_vmcs->vmcs;
+ 		vmcs_load(vmx->loaded_vmcs->vmcs);
+-		indirect_branch_prediction_barrier();
++
++		/*
++		 * No indirect branch prediction barrier needed when switching
++		 * the active VMCS within a guest, e.g. on nested VM-Enter.
++		 * The L1 VMM can protect itself with retpolines, IBPB or IBRS.
++		 */
++		if (!buddy || WARN_ON_ONCE(buddy->vmcs != prev))
++			indirect_branch_prediction_barrier();
+ 	}
+ 
+ 	if (!already_loaded) {
+@@ -1376,7 +1386,7 @@ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ 	struct vcpu_vmx *vmx = to_vmx(vcpu);
+ 
+-	vmx_vcpu_load_vmcs(vcpu, cpu);
++	vmx_vcpu_load_vmcs(vcpu, cpu, NULL);
+ 
+ 	vmx_vcpu_pi_load(vcpu, cpu);
+ 
+diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
+index e64da06c7009..ff7361aa824c 100644
+--- a/arch/x86/kvm/vmx/vmx.h
++++ b/arch/x86/kvm/vmx/vmx.h
+@@ -320,7 +320,8 @@ struct kvm_vmx {
+ };
+ 
+ bool nested_vmx_allowed(struct kvm_vcpu *vcpu);
+-void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu);
++void vmx_vcpu_load_vmcs(struct kvm_vcpu *vcpu, int cpu,
++			struct loaded_vmcs *buddy);
+ void vmx_vcpu_load(struct kvm_vcpu *vcpu, int cpu);
+ int allocate_vpid(void);
+ void free_vpid(int vpid);
+diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
+index 7f3371a39ed0..4b4a8a4e0251 100644
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -4568,7 +4568,7 @@ long kvm_arch_vcpu_ioctl(struct file *filp,
+ 
+ 		if (kvm_state.flags &
+ 		    ~(KVM_STATE_NESTED_RUN_PENDING | KVM_STATE_NESTED_GUEST_MODE
+-		      | KVM_STATE_NESTED_EVMCS))
++		      | KVM_STATE_NESTED_EVMCS | KVM_STATE_NESTED_MTF_PENDING))
+ 			break;
+ 
+ 		/* nested_run_pending implies guest_mode.  */
+@@ -6908,7 +6908,7 @@ restart:
+ 		if (!ctxt->have_exception ||
+ 		    exception_type(ctxt->exception.vector) == EXCPT_TRAP) {
+ 			kvm_rip_write(vcpu, ctxt->eip);
+-			if (r && ctxt->tf)
++			if (r && (ctxt->tf || (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)))
+ 				r = kvm_vcpu_do_singlestep(vcpu);
+ 			if (kvm_x86_ops->update_emulated_instruction)
+ 				kvm_x86_ops->update_emulated_instruction(vcpu);
+@@ -8115,9 +8115,8 @@ static void vcpu_load_eoi_exitmap(struct kvm_vcpu *vcpu)
+ 	kvm_x86_ops->load_eoi_exitmap(vcpu, eoi_exit_bitmap);
+ }
+ 
+-int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+-		unsigned long start, unsigned long end,
+-		bool blockable)
++void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
++					    unsigned long start, unsigned long end)
+ {
+ 	unsigned long apic_address;
+ 
+@@ -8128,8 +8127,6 @@ int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+ 	apic_address = gfn_to_hva(kvm, APIC_DEFAULT_PHYS_BASE >> PAGE_SHIFT);
+ 	if (start <= apic_address && apic_address < end)
+ 		kvm_make_all_cpus_request(kvm, KVM_REQ_APIC_PAGE_RELOAD);
+-
+-	return 0;
+ }
+ 
+ void kvm_vcpu_reload_apic_access_page(struct kvm_vcpu *vcpu)
+diff --git a/arch/x86/mm/dump_pagetables.c b/arch/x86/mm/dump_pagetables.c
+index 69309cd56fdf..33093fdedb02 100644
+--- a/arch/x86/mm/dump_pagetables.c
++++ b/arch/x86/mm/dump_pagetables.c
+@@ -249,10 +249,22 @@ static void note_wx(struct pg_state *st, unsigned long addr)
+ 		  (void *)st->start_address);
+ }
+ 
+-static inline pgprotval_t effective_prot(pgprotval_t prot1, pgprotval_t prot2)
++static void effective_prot(struct ptdump_state *pt_st, int level, u64 val)
+ {
+-	return (prot1 & prot2 & (_PAGE_USER | _PAGE_RW)) |
+-	       ((prot1 | prot2) & _PAGE_NX);
++	struct pg_state *st = container_of(pt_st, struct pg_state, ptdump);
++	pgprotval_t prot = val & PTE_FLAGS_MASK;
++	pgprotval_t effective;
++
++	if (level > 0) {
++		pgprotval_t higher_prot = st->prot_levels[level - 1];
++
++		effective = (higher_prot & prot & (_PAGE_USER | _PAGE_RW)) |
++			    ((higher_prot | prot) & _PAGE_NX);
++	} else {
++		effective = prot;
++	}
++
++	st->prot_levels[level] = effective;
+ }
+ 
+ /*
+@@ -270,16 +282,10 @@ static void note_page(struct ptdump_state *pt_st, unsigned long addr, int level,
+ 	struct seq_file *m = st->seq;
+ 
+ 	new_prot = val & PTE_FLAGS_MASK;
+-
+-	if (level > 0) {
+-		new_eff = effective_prot(st->prot_levels[level - 1],
+-					 new_prot);
+-	} else {
+-		new_eff = new_prot;
+-	}
+-
+-	if (level >= 0)
+-		st->prot_levels[level] = new_eff;
++	if (!val)
++		new_eff = 0;
++	else
++		new_eff = st->prot_levels[level];
+ 
+ 	/*
+ 	 * If we have a "break" in the series, we need to flush the state that
+@@ -374,6 +380,7 @@ static void ptdump_walk_pgd_level_core(struct seq_file *m,
+ 	struct pg_state st = {
+ 		.ptdump = {
+ 			.note_page	= note_page,
++			.effective_prot = effective_prot,
+ 			.range		= ptdump_ranges
+ 		},
+ 		.level = -1,
+diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
+index e723559c386a..0c67a5a94de3 100644
+--- a/arch/x86/pci/fixup.c
++++ b/arch/x86/pci/fixup.c
+@@ -572,6 +572,10 @@ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x2fc0, pci_invalid_bar);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6f60, pci_invalid_bar);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_invalid_bar);
+ DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa1ec, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa1ed, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa26c, pci_invalid_bar);
++DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0xa26d, pci_invalid_bar);
+ 
+ /*
+  * Device [1022:7808]
+diff --git a/crypto/algapi.c b/crypto/algapi.c
+index f8b4dc161c02..f1e6ccaff853 100644
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -403,7 +403,7 @@ static void crypto_wait_for_test(struct crypto_larval *larval)
+ 	err = wait_for_completion_killable(&larval->completion);
+ 	WARN_ON(err);
+ 	if (!err)
+-		crypto_probing_notify(CRYPTO_MSG_ALG_LOADED, larval);
++		crypto_notify(CRYPTO_MSG_ALG_LOADED, larval);
+ 
+ out:
+ 	crypto_larval_kill(&larval->alg);
+diff --git a/crypto/drbg.c b/crypto/drbg.c
+index b6929eb5f565..04379ca624cd 100644
+--- a/crypto/drbg.c
++++ b/crypto/drbg.c
+@@ -1294,8 +1294,10 @@ static inline int drbg_alloc_state(struct drbg_state *drbg)
+ 	if (IS_ENABLED(CONFIG_CRYPTO_FIPS)) {
+ 		drbg->prev = kzalloc(drbg_sec_strength(drbg->core->flags),
+ 				     GFP_KERNEL);
+-		if (!drbg->prev)
++		if (!drbg->prev) {
++			ret = -ENOMEM;
+ 			goto fini;
++		}
+ 		drbg->fips_primed = false;
+ 	}
+ 
+diff --git a/drivers/acpi/cppc_acpi.c b/drivers/acpi/cppc_acpi.c
+index a1a858ad4d18..f9b1a2abdbe2 100644
+--- a/drivers/acpi/cppc_acpi.c
++++ b/drivers/acpi/cppc_acpi.c
+@@ -865,6 +865,7 @@ int acpi_cppc_processor_probe(struct acpi_processor *pr)
+ 			"acpi_cppc");
+ 	if (ret) {
+ 		per_cpu(cpc_desc_ptr, pr->id) = NULL;
++		kobject_put(&cpc_ptr->kobj);
+ 		goto out_free;
+ 	}
+ 
+diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
+index 5832bc10aca8..95e200b618bd 100644
+--- a/drivers/acpi/device_pm.c
++++ b/drivers/acpi/device_pm.c
+@@ -186,7 +186,7 @@ int acpi_device_set_power(struct acpi_device *device, int state)
+ 		 * possibly drop references to the power resources in use.
+ 		 */
+ 		state = ACPI_STATE_D3_HOT;
+-		/* If _PR3 is not available, use D3hot as the target state. */
++		/* If D3cold is not supported, use D3hot as the target state. */
+ 		if (!device->power.states[ACPI_STATE_D3_COLD].flags.valid)
+ 			target_state = state;
+ 	} else if (!device->power.states[state].flags.valid) {
+diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
+index aba0d0027586..6d7a522952bf 100644
+--- a/drivers/acpi/evged.c
++++ b/drivers/acpi/evged.c
+@@ -79,6 +79,8 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
+ 	struct resource r;
+ 	struct acpi_resource_irq *p = &ares->data.irq;
+ 	struct acpi_resource_extended_irq *pext = &ares->data.extended_irq;
++	char ev_name[5];
++	u8 trigger;
+ 
+ 	if (ares->type == ACPI_RESOURCE_TYPE_END_TAG)
+ 		return AE_OK;
+@@ -87,14 +89,28 @@ static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
+ 		dev_err(dev, "unable to parse IRQ resource\n");
+ 		return AE_ERROR;
+ 	}
+-	if (ares->type == ACPI_RESOURCE_TYPE_IRQ)
++	if (ares->type == ACPI_RESOURCE_TYPE_IRQ) {
+ 		gsi = p->interrupts[0];
+-	else
++		trigger = p->triggering;
++	} else {
+ 		gsi = pext->interrupts[0];
++		trigger = pext->triggering;
++	}
+ 
+ 	irq = r.start;
+ 
+-	if (ACPI_FAILURE(acpi_get_handle(handle, "_EVT", &evt_handle))) {
++	switch (gsi) {
++	case 0 ... 255:
++		sprintf(ev_name, "_%c%02hhX",
++			trigger == ACPI_EDGE_SENSITIVE ? 'E' : 'L', gsi);
++
++		if (ACPI_SUCCESS(acpi_get_handle(handle, ev_name, &evt_handle)))
++			break;
++		/* fall through */
++	default:
++		if (ACPI_SUCCESS(acpi_get_handle(handle, "_EVT", &evt_handle)))
++			break;
++
+ 		dev_err(dev, "cannot locate _EVT method\n");
+ 		return AE_ERROR;
+ 	}
+diff --git a/drivers/acpi/scan.c b/drivers/acpi/scan.c
+index 6d3448895382..1b255e98de4d 100644
+--- a/drivers/acpi/scan.c
++++ b/drivers/acpi/scan.c
+@@ -919,12 +919,9 @@ static void acpi_bus_init_power_state(struct acpi_device *device, int state)
+ 
+ 		if (buffer.length && package
+ 		    && package->type == ACPI_TYPE_PACKAGE
+-		    && package->package.count) {
+-			int err = acpi_extract_power_resources(package, 0,
+-							       &ps->resources);
+-			if (!err)
+-				device->power.flags.power_resources = 1;
+-		}
++		    && package->package.count)
++			acpi_extract_power_resources(package, 0, &ps->resources);
++
+ 		ACPI_FREE(buffer.pointer);
+ 	}
+ 
+@@ -971,14 +968,27 @@ static void acpi_bus_get_power_flags(struct acpi_device *device)
+ 		acpi_bus_init_power_state(device, i);
+ 
+ 	INIT_LIST_HEAD(&device->power.states[ACPI_STATE_D3_COLD].resources);
+-	if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
+-		device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
+ 
+-	/* Set defaults for D0 and D3hot states (always valid) */
++	/* Set the defaults for D0 and D3hot (always supported). */
+ 	device->power.states[ACPI_STATE_D0].flags.valid = 1;
+ 	device->power.states[ACPI_STATE_D0].power = 100;
+ 	device->power.states[ACPI_STATE_D3_HOT].flags.valid = 1;
+ 
++	/*
++	 * Use power resources only if the D0 list of them is populated, because
++	 * some platforms may provide _PR3 only to indicate D3cold support and
++	 * in those cases the power resources list returned by it may be bogus.
++	 */
++	if (!list_empty(&device->power.states[ACPI_STATE_D0].resources)) {
++		device->power.flags.power_resources = 1;
++		/*
++		 * D3cold is supported if the D3hot list of power resources is
++		 * not empty.
++		 */
++		if (!list_empty(&device->power.states[ACPI_STATE_D3_HOT].resources))
++			device->power.states[ACPI_STATE_D3_COLD].flags.valid = 1;
++	}
++
+ 	if (acpi_bus_init_power(device))
+ 		device->flags.power_manageable = 0;
+ }
+diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
+index c60d2c6d31d6..3a89909b50a6 100644
+--- a/drivers/acpi/sysfs.c
++++ b/drivers/acpi/sysfs.c
+@@ -993,8 +993,10 @@ void acpi_sysfs_add_hotplug_profile(struct acpi_hotplug_profile *hotplug,
+ 
+ 	error = kobject_init_and_add(&hotplug->kobj,
+ 		&acpi_hotplug_profile_ktype, hotplug_kobj, "%s", name);
+-	if (error)
++	if (error) {
++		kobject_put(&hotplug->kobj);
+ 		goto err_out;
++	}
+ 
+ 	kobject_uevent(&hotplug->kobj, KOBJ_ADD);
+ 	return;
+diff --git a/drivers/base/core.c b/drivers/base/core.c
+index 68277687c160..3c4ecb824247 100644
+--- a/drivers/base/core.c
++++ b/drivers/base/core.c
+@@ -643,9 +643,17 @@ static void device_links_missing_supplier(struct device *dev)
+ {
+ 	struct device_link *link;
+ 
+-	list_for_each_entry(link, &dev->links.suppliers, c_node)
+-		if (link->status == DL_STATE_CONSUMER_PROBE)
++	list_for_each_entry(link, &dev->links.suppliers, c_node) {
++		if (link->status != DL_STATE_CONSUMER_PROBE)
++			continue;
++
++		if (link->supplier->links.status == DL_DEV_DRIVER_BOUND) {
+ 			WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
++		} else {
++			WARN_ON(!(link->flags & DL_FLAG_SYNC_STATE_ONLY));
++			WRITE_ONCE(link->status, DL_STATE_DORMANT);
++		}
++	}
+ }
+ 
+ /**
+@@ -684,11 +692,11 @@ int device_links_check_suppliers(struct device *dev)
+ 	device_links_write_lock();
+ 
+ 	list_for_each_entry(link, &dev->links.suppliers, c_node) {
+-		if (!(link->flags & DL_FLAG_MANAGED) ||
+-		    link->flags & DL_FLAG_SYNC_STATE_ONLY)
++		if (!(link->flags & DL_FLAG_MANAGED))
+ 			continue;
+ 
+-		if (link->status != DL_STATE_AVAILABLE) {
++		if (link->status != DL_STATE_AVAILABLE &&
++		    !(link->flags & DL_FLAG_SYNC_STATE_ONLY)) {
+ 			device_links_missing_supplier(dev);
+ 			ret = -EPROBE_DEFER;
+ 			break;
+@@ -949,11 +957,21 @@ static void __device_links_no_driver(struct device *dev)
+ 		if (!(link->flags & DL_FLAG_MANAGED))
+ 			continue;
+ 
+-		if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER)
++		if (link->flags & DL_FLAG_AUTOREMOVE_CONSUMER) {
+ 			device_link_drop_managed(link);
+-		else if (link->status == DL_STATE_CONSUMER_PROBE ||
+-			 link->status == DL_STATE_ACTIVE)
++			continue;
++		}
++
++		if (link->status != DL_STATE_CONSUMER_PROBE &&
++		    link->status != DL_STATE_ACTIVE)
++			continue;
++
++		if (link->supplier->links.status == DL_DEV_DRIVER_BOUND) {
+ 			WRITE_ONCE(link->status, DL_STATE_AVAILABLE);
++		} else {
++			WARN_ON(!(link->flags & DL_FLAG_SYNC_STATE_ONLY));
++			WRITE_ONCE(link->status, DL_STATE_DORMANT);
++		}
+ 	}
+ 
+ 	dev->links.status = DL_DEV_NO_DRIVER;
+diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
+index 8ef65c085640..c31ea3d18c8b 100644
+--- a/drivers/block/floppy.c
++++ b/drivers/block/floppy.c
+@@ -2902,17 +2902,17 @@ static blk_status_t floppy_queue_rq(struct blk_mq_hw_ctx *hctx,
+ 		 (unsigned long long) current_req->cmd_flags))
+ 		return BLK_STS_IOERR;
+ 
+-	spin_lock_irq(&floppy_lock);
+-	list_add_tail(&bd->rq->queuelist, &floppy_reqs);
+-	spin_unlock_irq(&floppy_lock);
+-
+ 	if (test_and_set_bit(0, &fdc_busy)) {
+ 		/* fdc busy, this new request will be treated when the
+ 		   current one is done */
+ 		is_alive(__func__, "old request running");
+-		return BLK_STS_OK;
++		return BLK_STS_RESOURCE;
+ 	}
+ 
++	spin_lock_irq(&floppy_lock);
++	list_add_tail(&bd->rq->queuelist, &floppy_reqs);
++	spin_unlock_irq(&floppy_lock);
++
+ 	command_status = FD_COMMAND_NONE;
+ 	__reschedule_timeout(MAXTIMEOUT, "fd_request");
+ 	set_fdc(0);
+diff --git a/drivers/char/agp/intel-gtt.c b/drivers/char/agp/intel-gtt.c
+index 66a62d17a3f5..3d42fc4290bc 100644
+--- a/drivers/char/agp/intel-gtt.c
++++ b/drivers/char/agp/intel-gtt.c
+@@ -846,6 +846,7 @@ void intel_gtt_insert_page(dma_addr_t addr,
+ 			   unsigned int flags)
+ {
+ 	intel_private.driver->write_entry(addr, pg, flags);
++	readl(intel_private.gtt + pg);
+ 	if (intel_private.driver->chipset_flush)
+ 		intel_private.driver->chipset_flush();
+ }
+@@ -871,7 +872,7 @@ void intel_gtt_insert_sg_entries(struct sg_table *st,
+ 			j++;
+ 		}
+ 	}
+-	wmb();
++	readl(intel_private.gtt + j - 1);
+ 	if (intel_private.driver->chipset_flush)
+ 		intel_private.driver->chipset_flush();
+ }
+@@ -1105,6 +1106,7 @@ static void i9xx_cleanup(void)
+ 
+ static void i9xx_chipset_flush(void)
+ {
++	wmb();
+ 	if (intel_private.i9xx_flush_page)
+ 		writel(1, intel_private.i9xx_flush_page);
+ }
+diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
+index f22b7aed6e64..006c58e32a5c 100644
+--- a/drivers/clk/clk.c
++++ b/drivers/clk/clk.c
+@@ -114,7 +114,11 @@ static int clk_pm_runtime_get(struct clk_core *core)
+ 		return 0;
+ 
+ 	ret = pm_runtime_get_sync(core->dev);
+-	return ret < 0 ? ret : 0;
++	if (ret < 0) {
++		pm_runtime_put_noidle(core->dev);
++		return ret;
++	}
++	return 0;
+ }
+ 
+ static void clk_pm_runtime_put(struct clk_core *core)
+diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
+index 808874bccf4a..347ea1ed260c 100644
+--- a/drivers/cpufreq/cpufreq.c
++++ b/drivers/cpufreq/cpufreq.c
+@@ -2515,26 +2515,27 @@ EXPORT_SYMBOL_GPL(cpufreq_update_limits);
+ static int cpufreq_boost_set_sw(int state)
+ {
+ 	struct cpufreq_policy *policy;
+-	int ret = -EINVAL;
+ 
+ 	for_each_active_policy(policy) {
++		int ret;
++
+ 		if (!policy->freq_table)
+-			continue;
++			return -ENXIO;
+ 
+ 		ret = cpufreq_frequency_table_cpuinfo(policy,
+ 						      policy->freq_table);
+ 		if (ret) {
+ 			pr_err("%s: Policy frequency update failed\n",
+ 			       __func__);
+-			break;
++			return ret;
+ 		}
+ 
+ 		ret = freq_qos_update_request(policy->max_freq_req, policy->max);
+ 		if (ret < 0)
+-			break;
++			return ret;
+ 	}
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ int cpufreq_boost_trigger_state(int state)
+diff --git a/drivers/crypto/cavium/nitrox/nitrox_main.c b/drivers/crypto/cavium/nitrox/nitrox_main.c
+index c4632d84c9a1..637be2f903d3 100644
+--- a/drivers/crypto/cavium/nitrox/nitrox_main.c
++++ b/drivers/crypto/cavium/nitrox/nitrox_main.c
+@@ -278,7 +278,7 @@ static void nitrox_remove_from_devlist(struct nitrox_device *ndev)
+ 
+ struct nitrox_device *nitrox_get_first_device(void)
+ {
+-	struct nitrox_device *ndev = NULL;
++	struct nitrox_device *ndev;
+ 
+ 	mutex_lock(&devlist_lock);
+ 	list_for_each_entry(ndev, &ndevlist, list) {
+@@ -286,7 +286,7 @@ struct nitrox_device *nitrox_get_first_device(void)
+ 			break;
+ 	}
+ 	mutex_unlock(&devlist_lock);
+-	if (!ndev)
++	if (&ndev->list == &ndevlist)
+ 		return NULL;
+ 
+ 	refcount_inc(&ndev->refcnt);
+diff --git a/drivers/crypto/virtio/virtio_crypto_algs.c b/drivers/crypto/virtio/virtio_crypto_algs.c
+index fd045e64972a..cb8a6ea2a4bc 100644
+--- a/drivers/crypto/virtio/virtio_crypto_algs.c
++++ b/drivers/crypto/virtio/virtio_crypto_algs.c
+@@ -350,13 +350,18 @@ __virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+ 	int err;
+ 	unsigned long flags;
+ 	struct scatterlist outhdr, iv_sg, status_sg, **sgs;
+-	int i;
+ 	u64 dst_len;
+ 	unsigned int num_out = 0, num_in = 0;
+ 	int sg_total;
+ 	uint8_t *iv;
++	struct scatterlist *sg;
+ 
+ 	src_nents = sg_nents_for_len(req->src, req->cryptlen);
++	if (src_nents < 0) {
++		pr_err("Invalid number of src SG.\n");
++		return src_nents;
++	}
++
+ 	dst_nents = sg_nents(req->dst);
+ 
+ 	pr_debug("virtio_crypto: Number of sgs (src_nents: %d, dst_nents: %d)\n",
+@@ -402,6 +407,7 @@ __virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+ 		goto free;
+ 	}
+ 
++	dst_len = min_t(unsigned int, req->cryptlen, dst_len);
+ 	pr_debug("virtio_crypto: src_len: %u, dst_len: %llu\n",
+ 			req->cryptlen, dst_len);
+ 
+@@ -442,12 +448,12 @@ __virtio_crypto_skcipher_do_req(struct virtio_crypto_sym_request *vc_sym_req,
+ 	vc_sym_req->iv = iv;
+ 
+ 	/* Source data */
+-	for (i = 0; i < src_nents; i++)
+-		sgs[num_out++] = &req->src[i];
++	for (sg = req->src; src_nents; sg = sg_next(sg), src_nents--)
++		sgs[num_out++] = sg;
+ 
+ 	/* Destination data */
+-	for (i = 0; i < dst_nents; i++)
+-		sgs[num_out + num_in++] = &req->dst[i];
++	for (sg = req->dst; sg; sg = sg_next(sg))
++		sgs[num_out + num_in++] = sg;
+ 
+ 	/* Status */
+ 	sg_init_one(&status_sg, &vc_req->status, sizeof(vc_req->status));
+@@ -577,10 +583,11 @@ static void virtio_crypto_skcipher_finalize_req(
+ 		scatterwalk_map_and_copy(req->iv, req->dst,
+ 					 req->cryptlen - AES_BLOCK_SIZE,
+ 					 AES_BLOCK_SIZE, 0);
+-	crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
+-					   req, err);
+ 	kzfree(vc_sym_req->iv);
+ 	virtcrypto_clear_request(&vc_sym_req->base);
++
++	crypto_finalize_skcipher_request(vc_sym_req->base.dataq->engine,
++					   req, err);
+ }
+ 
+ static struct virtio_crypto_algo virtio_crypto_algs[] = { {
+diff --git a/drivers/edac/i10nm_base.c b/drivers/edac/i10nm_base.c
+index 059eccf0582b..50995f4c57a2 100644
+--- a/drivers/edac/i10nm_base.c
++++ b/drivers/edac/i10nm_base.c
+@@ -161,7 +161,7 @@ static int i10nm_get_dimm_config(struct mem_ctl_info *mci)
+ 				 mtr, mcddrtcfg, imc->mc, i, j);
+ 
+ 			if (IS_DIMM_PRESENT(mtr))
+-				ndimms += skx_get_dimm_info(mtr, 0, dimm,
++				ndimms += skx_get_dimm_info(mtr, 0, 0, dimm,
+ 							    imc, i, j);
+ 			else if (IS_NVDIMM_PRESENT(mcddrtcfg, j))
+ 				ndimms += skx_get_nvdimm_info(dimm, imc, i, j,
+diff --git a/drivers/edac/skx_base.c b/drivers/edac/skx_base.c
+index 83545b4facb7..7469650877aa 100644
+--- a/drivers/edac/skx_base.c
++++ b/drivers/edac/skx_base.c
+@@ -163,27 +163,23 @@ static const struct x86_cpu_id skx_cpuids[] = {
+ };
+ MODULE_DEVICE_TABLE(x86cpu, skx_cpuids);
+ 
+-#define SKX_GET_MTMTR(dev, reg) \
+-	pci_read_config_dword((dev), 0x87c, &(reg))
+-
+-static bool skx_check_ecc(struct pci_dev *pdev)
++static bool skx_check_ecc(u32 mcmtr)
+ {
+-	u32 mtmtr;
+-
+-	SKX_GET_MTMTR(pdev, mtmtr);
+-
+-	return !!GET_BITFIELD(mtmtr, 2, 2);
++	return !!GET_BITFIELD(mcmtr, 2, 2);
+ }
+ 
+ static int skx_get_dimm_config(struct mem_ctl_info *mci)
+ {
+ 	struct skx_pvt *pvt = mci->pvt_info;
++	u32 mtr, mcmtr, amap, mcddrtcfg;
+ 	struct skx_imc *imc = pvt->imc;
+-	u32 mtr, amap, mcddrtcfg;
+ 	struct dimm_info *dimm;
+ 	int i, j;
+ 	int ndimms;
+ 
++	/* Only the mcmtr on the first channel is effective */
++	pci_read_config_dword(imc->chan[0].cdev, 0x87c, &mcmtr);
++
+ 	for (i = 0; i < SKX_NUM_CHANNELS; i++) {
+ 		ndimms = 0;
+ 		pci_read_config_dword(imc->chan[i].cdev, 0x8C, &amap);
+@@ -193,14 +189,14 @@ static int skx_get_dimm_config(struct mem_ctl_info *mci)
+ 			pci_read_config_dword(imc->chan[i].cdev,
+ 					      0x80 + 4 * j, &mtr);
+ 			if (IS_DIMM_PRESENT(mtr)) {
+-				ndimms += skx_get_dimm_info(mtr, amap, dimm, imc, i, j);
++				ndimms += skx_get_dimm_info(mtr, mcmtr, amap, dimm, imc, i, j);
+ 			} else if (IS_NVDIMM_PRESENT(mcddrtcfg, j)) {
+ 				ndimms += skx_get_nvdimm_info(dimm, imc, i, j,
+ 							      EDAC_MOD_STR);
+ 				nvdimm_count++;
+ 			}
+ 		}
+-		if (ndimms && !skx_check_ecc(imc->chan[0].cdev)) {
++		if (ndimms && !skx_check_ecc(mcmtr)) {
+ 			skx_printk(KERN_ERR, "ECC is disabled on imc %d\n", imc->mc);
+ 			return -ENODEV;
+ 		}
+diff --git a/drivers/edac/skx_common.c b/drivers/edac/skx_common.c
+index 99bbaf629b8d..412c651bef26 100644
+--- a/drivers/edac/skx_common.c
++++ b/drivers/edac/skx_common.c
+@@ -304,7 +304,7 @@ static int skx_get_dimm_attr(u32 reg, int lobit, int hibit, int add,
+ #define numrow(reg)	skx_get_dimm_attr(reg, 2, 4, 12, 1, 6, "rows")
+ #define numcol(reg)	skx_get_dimm_attr(reg, 0, 1, 10, 0, 2, "cols")
+ 
+-int skx_get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
++int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm,
+ 		      struct skx_imc *imc, int chan, int dimmno)
+ {
+ 	int  banks = 16, ranks, rows, cols, npages;
+@@ -324,8 +324,8 @@ int skx_get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
+ 		 imc->mc, chan, dimmno, size, npages,
+ 		 banks, 1 << ranks, rows, cols);
+ 
+-	imc->chan[chan].dimms[dimmno].close_pg = GET_BITFIELD(mtr, 0, 0);
+-	imc->chan[chan].dimms[dimmno].bank_xor_enable = GET_BITFIELD(mtr, 9, 9);
++	imc->chan[chan].dimms[dimmno].close_pg = GET_BITFIELD(mcmtr, 0, 0);
++	imc->chan[chan].dimms[dimmno].bank_xor_enable = GET_BITFIELD(mcmtr, 9, 9);
+ 	imc->chan[chan].dimms[dimmno].fine_grain_bank = GET_BITFIELD(amap, 0, 0);
+ 	imc->chan[chan].dimms[dimmno].rowbits = rows;
+ 	imc->chan[chan].dimms[dimmno].colbits = cols;
+diff --git a/drivers/edac/skx_common.h b/drivers/edac/skx_common.h
+index 60d1ea669afd..319f9b2f1f89 100644
+--- a/drivers/edac/skx_common.h
++++ b/drivers/edac/skx_common.h
+@@ -128,7 +128,7 @@ int skx_get_all_bus_mappings(unsigned int did, int off, enum type,
+ 
+ int skx_get_hi_lo(unsigned int did, int off[], u64 *tolm, u64 *tohm);
+ 
+-int skx_get_dimm_info(u32 mtr, u32 amap, struct dimm_info *dimm,
++int skx_get_dimm_info(u32 mtr, u32 mcmtr, u32 amap, struct dimm_info *dimm,
+ 		      struct skx_imc *imc, int chan, int dimmno);
+ 
+ int skx_get_nvdimm_info(struct dimm_info *dimm, struct skx_imc *imc,
+diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c
+index aff3dfb4d7ba..d187585db97a 100644
+--- a/drivers/firmware/efi/efivars.c
++++ b/drivers/firmware/efi/efivars.c
+@@ -522,8 +522,10 @@ efivar_create_sysfs_entry(struct efivar_entry *new_var)
+ 	ret = kobject_init_and_add(&new_var->kobj, &efivar_ktype,
+ 				   NULL, "%s", short_name);
+ 	kfree(short_name);
+-	if (ret)
++	if (ret) {
++		kobject_put(&new_var->kobj);
+ 		return ret;
++	}
+ 
+ 	kobject_uevent(&new_var->kobj, KOBJ_ADD);
+ 	if (efivar_entry_add(new_var, &efivar_sysfs_list)) {
+diff --git a/drivers/firmware/imx/imx-scu.c b/drivers/firmware/imx/imx-scu.c
+index f71eaa5bf52d..b3da2e193ad2 100644
+--- a/drivers/firmware/imx/imx-scu.c
++++ b/drivers/firmware/imx/imx-scu.c
+@@ -38,6 +38,7 @@ struct imx_sc_ipc {
+ 	struct device *dev;
+ 	struct mutex lock;
+ 	struct completion done;
++	bool fast_ipc;
+ 
+ 	/* temporarily store the SCU msg */
+ 	u32 *msg;
+@@ -115,6 +116,7 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
+ 	struct imx_sc_ipc *sc_ipc = sc_chan->sc_ipc;
+ 	struct imx_sc_rpc_msg *hdr;
+ 	u32 *data = msg;
++	int i;
+ 
+ 	if (!sc_ipc->msg) {
+ 		dev_warn(sc_ipc->dev, "unexpected rx idx %d 0x%08x, ignore!\n",
+@@ -122,6 +124,19 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
+ 		return;
+ 	}
+ 
++	if (sc_ipc->fast_ipc) {
++		hdr = msg;
++		sc_ipc->rx_size = hdr->size;
++		sc_ipc->msg[0] = *data++;
++
++		for (i = 1; i < sc_ipc->rx_size; i++)
++			sc_ipc->msg[i] = *data++;
++
++		complete(&sc_ipc->done);
++
++		return;
++	}
++
+ 	if (sc_chan->idx == 0) {
+ 		hdr = msg;
+ 		sc_ipc->rx_size = hdr->size;
+@@ -143,20 +158,22 @@ static void imx_scu_rx_callback(struct mbox_client *c, void *msg)
+ 
+ static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
+ {
+-	struct imx_sc_rpc_msg *hdr = msg;
++	struct imx_sc_rpc_msg hdr = *(struct imx_sc_rpc_msg *)msg;
+ 	struct imx_sc_chan *sc_chan;
+ 	u32 *data = msg;
+ 	int ret;
++	int size;
+ 	int i;
+ 
+ 	/* Check size */
+-	if (hdr->size > IMX_SC_RPC_MAX_MSG)
++	if (hdr.size > IMX_SC_RPC_MAX_MSG)
+ 		return -EINVAL;
+ 
+-	dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr->svc,
+-		hdr->func, hdr->size);
++	dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr.svc,
++		hdr.func, hdr.size);
+ 
+-	for (i = 0; i < hdr->size; i++) {
++	size = sc_ipc->fast_ipc ? 1 : hdr.size;
++	for (i = 0; i < size; i++) {
+ 		sc_chan = &sc_ipc->chans[i % 4];
+ 
+ 		/*
+@@ -168,8 +185,10 @@ static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
+ 		 * Wait for tx_done before every send to ensure that no
+ 		 * queueing happens at the mailbox channel level.
+ 		 */
+-		wait_for_completion(&sc_chan->tx_done);
+-		reinit_completion(&sc_chan->tx_done);
++		if (!sc_ipc->fast_ipc) {
++			wait_for_completion(&sc_chan->tx_done);
++			reinit_completion(&sc_chan->tx_done);
++		}
+ 
+ 		ret = mbox_send_message(sc_chan->ch, &data[i]);
+ 		if (ret < 0)
+@@ -246,6 +265,8 @@ static int imx_scu_probe(struct platform_device *pdev)
+ 	struct imx_sc_chan *sc_chan;
+ 	struct mbox_client *cl;
+ 	char *chan_name;
++	struct of_phandle_args args;
++	int num_channel;
+ 	int ret;
+ 	int i;
+ 
+@@ -253,11 +274,20 @@ static int imx_scu_probe(struct platform_device *pdev)
+ 	if (!sc_ipc)
+ 		return -ENOMEM;
+ 
+-	for (i = 0; i < SCU_MU_CHAN_NUM; i++) {
+-		if (i < 4)
++	ret = of_parse_phandle_with_args(pdev->dev.of_node, "mboxes",
++					 "#mbox-cells", 0, &args);
++	if (ret)
++		return ret;
++
++	sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu");
++
++	num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM;
++	for (i = 0; i < num_channel; i++) {
++		if (i < num_channel / 2)
+ 			chan_name = kasprintf(GFP_KERNEL, "tx%d", i);
+ 		else
+-			chan_name = kasprintf(GFP_KERNEL, "rx%d", i - 4);
++			chan_name = kasprintf(GFP_KERNEL, "rx%d",
++					      i - num_channel / 2);
+ 
+ 		if (!chan_name)
+ 			return -ENOMEM;
+@@ -269,13 +299,15 @@ static int imx_scu_probe(struct platform_device *pdev)
+ 		cl->knows_txdone = true;
+ 		cl->rx_callback = imx_scu_rx_callback;
+ 
+-		/* Initial tx_done completion as "done" */
+-		cl->tx_done = imx_scu_tx_done;
+-		init_completion(&sc_chan->tx_done);
+-		complete(&sc_chan->tx_done);
++		if (!sc_ipc->fast_ipc) {
++			/* Initial tx_done completion as "done" */
++			cl->tx_done = imx_scu_tx_done;
++			init_completion(&sc_chan->tx_done);
++			complete(&sc_chan->tx_done);
++		}
+ 
+ 		sc_chan->sc_ipc = sc_ipc;
+-		sc_chan->idx = i % 4;
++		sc_chan->idx = i % (num_channel / 2);
+ 		sc_chan->ch = mbox_request_channel_byname(cl, chan_name);
+ 		if (IS_ERR(sc_chan->ch)) {
+ 			ret = PTR_ERR(sc_chan->ch);
+diff --git a/drivers/gpu/drm/amd/display/dc/core/dc.c b/drivers/gpu/drm/amd/display/dc/core/dc.c
+index 32a07665863f..fff95e6b46c7 100644
+--- a/drivers/gpu/drm/amd/display/dc/core/dc.c
++++ b/drivers/gpu/drm/amd/display/dc/core/dc.c
+@@ -1362,10 +1362,24 @@ bool dc_commit_state(struct dc *dc, struct dc_state *context)
+ 	return (result == DC_OK);
+ }
+ 
+-bool dc_is_hw_initialized(struct dc *dc)
++static bool is_flip_pending_in_pipes(struct dc *dc, struct dc_state *context)
+ {
+-	struct dc_bios *dcb = dc->ctx->dc_bios;
+-	return dcb->funcs->is_accelerated_mode(dcb);
++	int i;
++	struct pipe_ctx *pipe;
++
++	for (i = 0; i < MAX_PIPES; i++) {
++		pipe = &context->res_ctx.pipe_ctx[i];
++
++		if (!pipe->plane_state)
++			continue;
++
++		/* Must set to false to start with, due to OR in update function */
++		pipe->plane_state->status.is_flip_pending = false;
++		dc->hwss.update_pending_status(pipe);
++		if (pipe->plane_state->status.is_flip_pending)
++			return true;
++	}
++	return false;
+ }
+ 
+ bool dc_post_update_surfaces_to_stream(struct dc *dc)
+@@ -1378,6 +1392,9 @@ bool dc_post_update_surfaces_to_stream(struct dc *dc)
+ 
+ 	post_surface_trace(dc);
+ 
++	if (is_flip_pending_in_pipes(dc, context))
++		return true;
++
+ 	for (i = 0; i < dc->res_pool->pipe_count; i++)
+ 		if (context->res_ctx.pipe_ctx[i].stream == NULL ||
+ 		    context->res_ctx.pipe_ctx[i].plane_state == NULL) {
+diff --git a/drivers/gpu/drm/amd/display/dc/dc.h b/drivers/gpu/drm/amd/display/dc/dc.h
+index 8ff25b5dd2f6..e8d126890d7e 100644
+--- a/drivers/gpu/drm/amd/display/dc/dc.h
++++ b/drivers/gpu/drm/amd/display/dc/dc.h
+@@ -1075,7 +1075,6 @@ unsigned int dc_get_current_backlight_pwm(struct dc *dc);
+ unsigned int dc_get_target_backlight_pwm(struct dc *dc);
+ 
+ bool dc_is_dmcu_initialized(struct dc *dc);
+-bool dc_is_hw_initialized(struct dc *dc);
+ 
+ enum dc_status dc_set_clock(struct dc *dc, enum dc_clock_type clock_type, uint32_t clk_khz, uint32_t stepping);
+ void dc_get_clock(struct dc *dc, enum dc_clock_type clock_type, struct dc_clock_config *clock_cfg);
+diff --git a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+index 580319b7bf1a..0bf3cb239bf0 100644
+--- a/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
++++ b/drivers/gpu/drm/i915/gem/i915_gem_userptr.c
+@@ -600,6 +600,14 @@ static int i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
+ 				      GFP_KERNEL |
+ 				      __GFP_NORETRY |
+ 				      __GFP_NOWARN);
++		/*
++		 * Using __get_user_pages_fast() with a read-only
++		 * access is questionable. A read-only page may be
++		 * COW-broken, and then this might end up giving
++		 * the wrong side of the COW..
++		 *
++		 * We may or may not care.
++		 */
+ 		if (pvec) /* defer to worker if malloc fails */
+ 			pinned = __get_user_pages_fast(obj->userptr.ptr,
+ 						       num_pages,
+diff --git a/drivers/gpu/drm/vkms/vkms_drv.h b/drivers/gpu/drm/vkms/vkms_drv.h
+index 7d52e24564db..7fe2edd4d009 100644
+--- a/drivers/gpu/drm/vkms/vkms_drv.h
++++ b/drivers/gpu/drm/vkms/vkms_drv.h
+@@ -121,11 +121,6 @@ struct drm_plane *vkms_plane_init(struct vkms_device *vkmsdev,
+ 				  enum drm_plane_type type, int index);
+ 
+ /* Gem stuff */
+-struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+-				       struct drm_file *file,
+-				       u32 *handle,
+-				       u64 size);
+-
+ vm_fault_t vkms_gem_fault(struct vm_fault *vmf);
+ 
+ int vkms_dumb_create(struct drm_file *file, struct drm_device *dev,
+diff --git a/drivers/gpu/drm/vkms/vkms_gem.c b/drivers/gpu/drm/vkms/vkms_gem.c
+index 2e01186fb943..c541fec57566 100644
+--- a/drivers/gpu/drm/vkms/vkms_gem.c
++++ b/drivers/gpu/drm/vkms/vkms_gem.c
+@@ -97,10 +97,10 @@ vm_fault_t vkms_gem_fault(struct vm_fault *vmf)
+ 	return ret;
+ }
+ 
+-struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+-				       struct drm_file *file,
+-				       u32 *handle,
+-				       u64 size)
++static struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
++					      struct drm_file *file,
++					      u32 *handle,
++					      u64 size)
+ {
+ 	struct vkms_gem_object *obj;
+ 	int ret;
+@@ -113,7 +113,6 @@ struct drm_gem_object *vkms_gem_create(struct drm_device *dev,
+ 		return ERR_CAST(obj);
+ 
+ 	ret = drm_gem_handle_create(file, &obj->gem, handle);
+-	drm_gem_object_put_unlocked(&obj->gem);
+ 	if (ret)
+ 		return ERR_PTR(ret);
+ 
+@@ -142,6 +141,8 @@ int vkms_dumb_create(struct drm_file *file, struct drm_device *dev,
+ 	args->size = gem_obj->size;
+ 	args->pitch = pitch;
+ 
++	drm_gem_object_put_unlocked(gem_obj);
++
+ 	DRM_DEBUG_DRIVER("Created object of size %lld\n", size);
+ 
+ 	return 0;
+diff --git a/drivers/infiniband/core/uverbs_main.c b/drivers/infiniband/core/uverbs_main.c
+index 1bab8de14757..b94572e9c24f 100644
+--- a/drivers/infiniband/core/uverbs_main.c
++++ b/drivers/infiniband/core/uverbs_main.c
+@@ -296,6 +296,8 @@ static __poll_t ib_uverbs_event_poll(struct ib_uverbs_event_queue *ev_queue,
+ 	spin_lock_irq(&ev_queue->lock);
+ 	if (!list_empty(&ev_queue->event_list))
+ 		pollflags = EPOLLIN | EPOLLRDNORM;
++	else if (ev_queue->is_closed)
++		pollflags = EPOLLERR;
+ 	spin_unlock_irq(&ev_queue->lock);
+ 
+ 	return pollflags;
+diff --git a/drivers/input/misc/axp20x-pek.c b/drivers/input/misc/axp20x-pek.c
+index c8f87df93a50..9c6386b2af33 100644
+--- a/drivers/input/misc/axp20x-pek.c
++++ b/drivers/input/misc/axp20x-pek.c
+@@ -205,8 +205,11 @@ ATTRIBUTE_GROUPS(axp20x);
+ 
+ static irqreturn_t axp20x_pek_irq(int irq, void *pwr)
+ {
+-	struct input_dev *idev = pwr;
+-	struct axp20x_pek *axp20x_pek = input_get_drvdata(idev);
++	struct axp20x_pek *axp20x_pek = pwr;
++	struct input_dev *idev = axp20x_pek->input;
++
++	if (!idev)
++		return IRQ_HANDLED;
+ 
+ 	/*
+ 	 * The power-button is connected to ground so a falling edge (dbf)
+@@ -225,22 +228,9 @@ static irqreturn_t axp20x_pek_irq(int irq, void *pwr)
+ static int axp20x_pek_probe_input_device(struct axp20x_pek *axp20x_pek,
+ 					 struct platform_device *pdev)
+ {
+-	struct axp20x_dev *axp20x = axp20x_pek->axp20x;
+ 	struct input_dev *idev;
+ 	int error;
+ 
+-	axp20x_pek->irq_dbr = platform_get_irq_byname(pdev, "PEK_DBR");
+-	if (axp20x_pek->irq_dbr < 0)
+-		return axp20x_pek->irq_dbr;
+-	axp20x_pek->irq_dbr = regmap_irq_get_virq(axp20x->regmap_irqc,
+-						  axp20x_pek->irq_dbr);
+-
+-	axp20x_pek->irq_dbf = platform_get_irq_byname(pdev, "PEK_DBF");
+-	if (axp20x_pek->irq_dbf < 0)
+-		return axp20x_pek->irq_dbf;
+-	axp20x_pek->irq_dbf = regmap_irq_get_virq(axp20x->regmap_irqc,
+-						  axp20x_pek->irq_dbf);
+-
+ 	axp20x_pek->input = devm_input_allocate_device(&pdev->dev);
+ 	if (!axp20x_pek->input)
+ 		return -ENOMEM;
+@@ -255,24 +245,6 @@ static int axp20x_pek_probe_input_device(struct axp20x_pek *axp20x_pek,
+ 
+ 	input_set_drvdata(idev, axp20x_pek);
+ 
+-	error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbr,
+-					     axp20x_pek_irq, 0,
+-					     "axp20x-pek-dbr", idev);
+-	if (error < 0) {
+-		dev_err(&pdev->dev, "Failed to request dbr IRQ#%d: %d\n",
+-			axp20x_pek->irq_dbr, error);
+-		return error;
+-	}
+-
+-	error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbf,
+-					  axp20x_pek_irq, 0,
+-					  "axp20x-pek-dbf", idev);
+-	if (error < 0) {
+-		dev_err(&pdev->dev, "Failed to request dbf IRQ#%d: %d\n",
+-			axp20x_pek->irq_dbf, error);
+-		return error;
+-	}
+-
+ 	error = input_register_device(idev);
+ 	if (error) {
+ 		dev_err(&pdev->dev, "Can't register input device: %d\n",
+@@ -280,8 +252,6 @@ static int axp20x_pek_probe_input_device(struct axp20x_pek *axp20x_pek,
+ 		return error;
+ 	}
+ 
+-	device_init_wakeup(&pdev->dev, true);
+-
+ 	return 0;
+ }
+ 
+@@ -339,6 +309,18 @@ static int axp20x_pek_probe(struct platform_device *pdev)
+ 
+ 	axp20x_pek->axp20x = dev_get_drvdata(pdev->dev.parent);
+ 
++	axp20x_pek->irq_dbr = platform_get_irq_byname(pdev, "PEK_DBR");
++	if (axp20x_pek->irq_dbr < 0)
++		return axp20x_pek->irq_dbr;
++	axp20x_pek->irq_dbr = regmap_irq_get_virq(
++			axp20x_pek->axp20x->regmap_irqc, axp20x_pek->irq_dbr);
++
++	axp20x_pek->irq_dbf = platform_get_irq_byname(pdev, "PEK_DBF");
++	if (axp20x_pek->irq_dbf < 0)
++		return axp20x_pek->irq_dbf;
++	axp20x_pek->irq_dbf = regmap_irq_get_virq(
++			axp20x_pek->axp20x->regmap_irqc, axp20x_pek->irq_dbf);
++
+ 	if (axp20x_pek_should_register_input(axp20x_pek, pdev)) {
+ 		error = axp20x_pek_probe_input_device(axp20x_pek, pdev);
+ 		if (error)
+@@ -347,6 +329,26 @@ static int axp20x_pek_probe(struct platform_device *pdev)
+ 
+ 	axp20x_pek->info = (struct axp20x_info *)match->driver_data;
+ 
++	error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbr,
++					     axp20x_pek_irq, 0,
++					     "axp20x-pek-dbr", axp20x_pek);
++	if (error < 0) {
++		dev_err(&pdev->dev, "Failed to request dbr IRQ#%d: %d\n",
++			axp20x_pek->irq_dbr, error);
++		return error;
++	}
++
++	error = devm_request_any_context_irq(&pdev->dev, axp20x_pek->irq_dbf,
++					  axp20x_pek_irq, 0,
++					  "axp20x-pek-dbf", axp20x_pek);
++	if (error < 0) {
++		dev_err(&pdev->dev, "Failed to request dbf IRQ#%d: %d\n",
++			axp20x_pek->irq_dbf, error);
++		return error;
++	}
++
++	device_init_wakeup(&pdev->dev, true);
++
+ 	platform_set_drvdata(pdev, axp20x_pek);
+ 
+ 	return 0;
+diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c
+index 4d2036209b45..758dae8d6500 100644
+--- a/drivers/input/mouse/synaptics.c
++++ b/drivers/input/mouse/synaptics.c
+@@ -170,6 +170,7 @@ static const char * const smbus_pnp_ids[] = {
+ 	"LEN005b", /* P50 */
+ 	"LEN005e", /* T560 */
+ 	"LEN006c", /* T470s */
++	"LEN007a", /* T470s */
+ 	"LEN0071", /* T480 */
+ 	"LEN0072", /* X1 Carbon Gen 5 (2017) - Elan/ALPS trackpoint */
+ 	"LEN0073", /* X1 Carbon G5 (Elantech) */
+diff --git a/drivers/input/touchscreen/mms114.c b/drivers/input/touchscreen/mms114.c
+index 69c6d559eeb0..2ef1adaed9af 100644
+--- a/drivers/input/touchscreen/mms114.c
++++ b/drivers/input/touchscreen/mms114.c
+@@ -91,15 +91,15 @@ static int __mms114_read_reg(struct mms114_data *data, unsigned int reg,
+ 	if (reg <= MMS114_MODE_CONTROL && reg + len > MMS114_MODE_CONTROL)
+ 		BUG();
+ 
+-	/* Write register: use repeated start */
++	/* Write register */
+ 	xfer[0].addr = client->addr;
+-	xfer[0].flags = I2C_M_TEN | I2C_M_NOSTART;
++	xfer[0].flags = client->flags & I2C_M_TEN;
+ 	xfer[0].len = 1;
+ 	xfer[0].buf = &buf;
+ 
+ 	/* Read data */
+ 	xfer[1].addr = client->addr;
+-	xfer[1].flags = I2C_M_RD;
++	xfer[1].flags = (client->flags & I2C_M_TEN) | I2C_M_RD;
+ 	xfer[1].len = len;
+ 	xfer[1].buf = val;
+ 
+@@ -428,10 +428,8 @@ static int mms114_probe(struct i2c_client *client,
+ 	const void *match_data;
+ 	int error;
+ 
+-	if (!i2c_check_functionality(client->adapter,
+-				I2C_FUNC_PROTOCOL_MANGLING)) {
+-		dev_err(&client->dev,
+-			"Need i2c bus that supports protocol mangling\n");
++	if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
++		dev_err(&client->dev, "Not supported I2C adapter\n");
+ 		return -ENODEV;
+ 	}
+ 
+diff --git a/drivers/media/common/videobuf2/videobuf2-dma-contig.c b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+index d0c9dffe49e5..a26d43aa7595 100644
+--- a/drivers/media/common/videobuf2/videobuf2-dma-contig.c
++++ b/drivers/media/common/videobuf2/videobuf2-dma-contig.c
+@@ -726,9 +726,8 @@ EXPORT_SYMBOL_GPL(vb2_dma_contig_memops);
+ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size)
+ {
+ 	if (!dev->dma_parms) {
+-		dev->dma_parms = kzalloc(sizeof(*dev->dma_parms), GFP_KERNEL);
+-		if (!dev->dma_parms)
+-			return -ENOMEM;
++		dev_err(dev, "Failed to set max_seg_size: dma_parms is NULL\n");
++		return -ENODEV;
+ 	}
+ 	if (dma_get_max_seg_size(dev) < size)
+ 		return dma_set_max_seg_size(dev, size);
+@@ -737,21 +736,6 @@ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size)
+ }
+ EXPORT_SYMBOL_GPL(vb2_dma_contig_set_max_seg_size);
+ 
+-/*
+- * vb2_dma_contig_clear_max_seg_size() - release resources for DMA parameters
+- * @dev:	device for configuring DMA parameters
+- *
+- * This function releases resources allocated to configure DMA parameters
+- * (see vb2_dma_contig_set_max_seg_size() function). It should be called from
+- * device drivers on driver remove.
+- */
+-void vb2_dma_contig_clear_max_seg_size(struct device *dev)
+-{
+-	kfree(dev->dma_parms);
+-	dev->dma_parms = NULL;
+-}
+-EXPORT_SYMBOL_GPL(vb2_dma_contig_clear_max_seg_size);
+-
+ MODULE_DESCRIPTION("DMA-contig memory handling routines for videobuf2");
+ MODULE_AUTHOR("Pawel Osciak <pawel@osciak.com>");
+ MODULE_LICENSE("GPL");
+diff --git a/drivers/mmc/core/sdio.c b/drivers/mmc/core/sdio.c
+index ebb387aa5158..20eed28ea60d 100644
+--- a/drivers/mmc/core/sdio.c
++++ b/drivers/mmc/core/sdio.c
+@@ -584,7 +584,7 @@ try_again:
+ 	 */
+ 	err = mmc_send_io_op_cond(host, ocr, &rocr);
+ 	if (err)
+-		goto err;
++		return err;
+ 
+ 	/*
+ 	 * For SPI, enable CRC as appropriate.
+@@ -592,17 +592,15 @@ try_again:
+ 	if (mmc_host_is_spi(host)) {
+ 		err = mmc_spi_set_crc(host, use_spi_crc);
+ 		if (err)
+-			goto err;
++			return err;
+ 	}
+ 
+ 	/*
+ 	 * Allocate card structure.
+ 	 */
+ 	card = mmc_alloc_card(host, NULL);
+-	if (IS_ERR(card)) {
+-		err = PTR_ERR(card);
+-		goto err;
+-	}
++	if (IS_ERR(card))
++		return PTR_ERR(card);
+ 
+ 	if ((rocr & R4_MEMORY_PRESENT) &&
+ 	    mmc_sd_get_cid(host, ocr & rocr, card->raw_cid, NULL) == 0) {
+@@ -610,19 +608,15 @@ try_again:
+ 
+ 		if (oldcard && (oldcard->type != MMC_TYPE_SD_COMBO ||
+ 		    memcmp(card->raw_cid, oldcard->raw_cid, sizeof(card->raw_cid)) != 0)) {
+-			mmc_remove_card(card);
+-			pr_debug("%s: Perhaps the card was replaced\n",
+-				mmc_hostname(host));
+-			return -ENOENT;
++			err = -ENOENT;
++			goto mismatch;
+ 		}
+ 	} else {
+ 		card->type = MMC_TYPE_SDIO;
+ 
+ 		if (oldcard && oldcard->type != MMC_TYPE_SDIO) {
+-			mmc_remove_card(card);
+-			pr_debug("%s: Perhaps the card was replaced\n",
+-				mmc_hostname(host));
+-			return -ENOENT;
++			err = -ENOENT;
++			goto mismatch;
+ 		}
+ 	}
+ 
+@@ -677,7 +671,7 @@ try_again:
+ 	if (!oldcard && card->type == MMC_TYPE_SD_COMBO) {
+ 		err = mmc_sd_get_csd(host, card);
+ 		if (err)
+-			return err;
++			goto remove;
+ 
+ 		mmc_decode_cid(card);
+ 	}
+@@ -704,7 +698,12 @@ try_again:
+ 			mmc_set_timing(card->host, MMC_TIMING_SD_HS);
+ 		}
+ 
+-		goto finish;
++		if (oldcard)
++			mmc_remove_card(card);
++		else
++			host->card = card;
++
++		return 0;
+ 	}
+ 
+ 	/*
+@@ -718,9 +717,8 @@ try_again:
+ 			/* Retry init sequence, but without R4_18V_PRESENT. */
+ 			retries = 0;
+ 			goto try_again;
+-		} else {
+-			goto remove;
+ 		}
++		return err;
+ 	}
+ 
+ 	/*
+@@ -731,16 +729,14 @@ try_again:
+ 		goto remove;
+ 
+ 	if (oldcard) {
+-		int same = (card->cis.vendor == oldcard->cis.vendor &&
+-			    card->cis.device == oldcard->cis.device);
+-		mmc_remove_card(card);
+-		if (!same) {
+-			pr_debug("%s: Perhaps the card was replaced\n",
+-				mmc_hostname(host));
+-			return -ENOENT;
++		if (card->cis.vendor == oldcard->cis.vendor &&
++		    card->cis.device == oldcard->cis.device) {
++			mmc_remove_card(card);
++			card = oldcard;
++		} else {
++			err = -ENOENT;
++			goto mismatch;
+ 		}
+-
+-		card = oldcard;
+ 	}
+ 	card->ocr = ocr_card;
+ 	mmc_fixup_device(card, sdio_fixup_methods);
+@@ -801,16 +797,15 @@ try_again:
+ 		err = -EINVAL;
+ 		goto remove;
+ 	}
+-finish:
+-	if (!oldcard)
+-		host->card = card;
++
++	host->card = card;
+ 	return 0;
+ 
++mismatch:
++	pr_debug("%s: Perhaps the card was replaced\n", mmc_hostname(host));
+ remove:
+-	if (!oldcard)
++	if (oldcard != card)
+ 		mmc_remove_card(card);
+-
+-err:
+ 	return err;
+ }
+ 
+diff --git a/drivers/mmc/host/mmci_stm32_sdmmc.c b/drivers/mmc/host/mmci_stm32_sdmmc.c
+index 01f222758910..966303291b8f 100644
+--- a/drivers/mmc/host/mmci_stm32_sdmmc.c
++++ b/drivers/mmc/host/mmci_stm32_sdmmc.c
+@@ -162,6 +162,9 @@ static int sdmmc_idma_start(struct mmci_host *host, unsigned int *datactrl)
+ static void sdmmc_idma_finalize(struct mmci_host *host, struct mmc_data *data)
+ {
+ 	writel_relaxed(0, host->base + MMCI_STM32_IDMACTRLR);
++
++	if (!data->host_cookie)
++		sdmmc_idma_unprep_data(host, data, 0);
+ }
+ 
+ static void mmci_sdmmc_set_clkreg(struct mmci_host *host, unsigned int desired)
+diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c
+index b68dcd1b0d50..ab358d8e82fa 100644
+--- a/drivers/mmc/host/sdhci-msm.c
++++ b/drivers/mmc/host/sdhci-msm.c
+@@ -1117,6 +1117,12 @@ static int sdhci_msm_execute_tuning(struct mmc_host *mmc, u32 opcode)
+ 	/* Clock-Data-Recovery used to dynamically adjust RX sampling point */
+ 	msm_host->use_cdr = true;
+ 
++	/*
++	 * Clear tuning_done flag before tuning to ensure proper
++	 * HS400 settings.
++	 */
++	msm_host->tuning_done = 0;
++
+ 	/*
+ 	 * For HS400 tuning in HS200 timing requires:
+ 	 * - select MCLK/2 in VENDOR_SPEC
+diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
+index fcef5c0d0908..b6cb205d2d95 100644
+--- a/drivers/mmc/host/sdhci-of-at91.c
++++ b/drivers/mmc/host/sdhci-of-at91.c
+@@ -136,9 +136,12 @@ static void sdhci_at91_reset(struct sdhci_host *host, u8 mask)
+ 	    || mmc_gpio_get_cd(host->mmc) >= 0)
+ 		sdhci_at91_set_force_card_detect(host);
+ 
+-	if (priv->cal_always_on && (mask & SDHCI_RESET_ALL))
+-		sdhci_writel(host, SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
++	if (priv->cal_always_on && (mask & SDHCI_RESET_ALL)) {
++		u32 calcr = sdhci_readl(host, SDMMC_CALCR);
++
++		sdhci_writel(host, calcr | SDMMC_CALCR_ALWYSON | SDMMC_CALCR_EN,
+ 			     SDMMC_CALCR);
++	}
+ }
+ 
+ static const struct sdhci_ops sdhci_at91_sama5d2_ops = {
+diff --git a/drivers/mmc/host/tmio_mmc_core.c b/drivers/mmc/host/tmio_mmc_core.c
+index 1e424bcdbd5f..735941f81b95 100644
+--- a/drivers/mmc/host/tmio_mmc_core.c
++++ b/drivers/mmc/host/tmio_mmc_core.c
+@@ -1286,12 +1286,14 @@ void tmio_mmc_host_remove(struct tmio_mmc_host *host)
+ 	cancel_work_sync(&host->done);
+ 	cancel_delayed_work_sync(&host->delayed_reset_work);
+ 	tmio_mmc_release_dma(host);
++	tmio_mmc_disable_mmc_irqs(host, TMIO_MASK_ALL);
+ 
+-	pm_runtime_dont_use_autosuspend(&pdev->dev);
+ 	if (host->native_hotplug)
+ 		pm_runtime_put_noidle(&pdev->dev);
+-	pm_runtime_put_sync(&pdev->dev);
++
+ 	pm_runtime_disable(&pdev->dev);
++	pm_runtime_dont_use_autosuspend(&pdev->dev);
++	pm_runtime_put_noidle(&pdev->dev);
+ }
+ EXPORT_SYMBOL_GPL(tmio_mmc_host_remove);
+ 
+diff --git a/drivers/mmc/host/uniphier-sd.c b/drivers/mmc/host/uniphier-sd.c
+index a1683c49cb90..f82baf99fd69 100644
+--- a/drivers/mmc/host/uniphier-sd.c
++++ b/drivers/mmc/host/uniphier-sd.c
+@@ -610,11 +610,6 @@ static int uniphier_sd_probe(struct platform_device *pdev)
+ 		}
+ 	}
+ 
+-	ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
+-			       dev_name(dev), host);
+-	if (ret)
+-		goto free_host;
+-
+ 	if (priv->caps & UNIPHIER_SD_CAP_EXTENDED_IP)
+ 		host->dma_ops = &uniphier_sd_internal_dma_ops;
+ 	else
+@@ -642,8 +637,15 @@ static int uniphier_sd_probe(struct platform_device *pdev)
+ 	if (ret)
+ 		goto free_host;
+ 
++	ret = devm_request_irq(dev, irq, tmio_mmc_irq, IRQF_SHARED,
++			       dev_name(dev), host);
++	if (ret)
++		goto remove_host;
++
+ 	return 0;
+ 
++remove_host:
++	tmio_mmc_host_remove(host);
+ free_host:
+ 	tmio_mmc_host_free(host);
+ 
+diff --git a/drivers/net/dsa/qca8k.c b/drivers/net/dsa/qca8k.c
+index 9f4205b4439b..d2b5ab403e06 100644
+--- a/drivers/net/dsa/qca8k.c
++++ b/drivers/net/dsa/qca8k.c
+@@ -1079,8 +1079,7 @@ qca8k_sw_probe(struct mdio_device *mdiodev)
+ 	if (id != QCA8K_ID_QCA8337)
+ 		return -ENODEV;
+ 
+-	priv->ds = devm_kzalloc(&mdiodev->dev, sizeof(*priv->ds),
+-				QCA8K_NUM_PORTS);
++	priv->ds = devm_kzalloc(&mdiodev->dev, sizeof(*priv->ds), GFP_KERNEL);
+ 	if (!priv->ds)
+ 		return -ENOMEM;
+ 
+diff --git a/drivers/net/ethernet/amazon/ena/ena_netdev.c b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+index cada6e7e30f4..5f6892aa6588 100644
+--- a/drivers/net/ethernet/amazon/ena/ena_netdev.c
++++ b/drivers/net/ethernet/amazon/ena/ena_netdev.c
+@@ -358,7 +358,7 @@ error_unmap_dma:
+ 	ena_unmap_tx_buff(xdp_ring, tx_info);
+ 	tx_info->xdpf = NULL;
+ error_drop_packet:
+-
++	__free_page(tx_info->xdp_rx_page);
+ 	return NETDEV_TX_OK;
+ }
+ 
+@@ -1642,11 +1642,9 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
+ 					 &next_to_clean);
+ 
+ 		if (unlikely(!skb)) {
+-			if (xdp_verdict == XDP_TX) {
++			if (xdp_verdict == XDP_TX)
+ 				ena_free_rx_page(rx_ring,
+ 						 &rx_ring->rx_buffer_info[rx_ring->ena_bufs[0].req_id]);
+-				res_budget--;
+-			}
+ 			for (i = 0; i < ena_rx_ctx.descs; i++) {
+ 				rx_ring->free_ids[next_to_clean] =
+ 					rx_ring->ena_bufs[i].req_id;
+@@ -1654,8 +1652,10 @@ static int ena_clean_rx_irq(struct ena_ring *rx_ring, struct napi_struct *napi,
+ 					ENA_RX_RING_IDX_NEXT(next_to_clean,
+ 							     rx_ring->ring_size);
+ 			}
+-			if (xdp_verdict == XDP_TX || xdp_verdict == XDP_DROP)
++			if (xdp_verdict != XDP_PASS) {
++				res_budget--;
+ 				continue;
++			}
+ 			break;
+ 		}
+ 
+diff --git a/drivers/net/ethernet/cadence/macb_main.c b/drivers/net/ethernet/cadence/macb_main.c
+index f42382c2ecd0..9067b413d6b7 100644
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -2545,19 +2545,21 @@ static int macb_open(struct net_device *dev)
+ 
+ 	err = macb_phylink_connect(bp);
+ 	if (err)
+-		goto pm_exit;
++		goto napi_exit;
+ 
+ 	netif_tx_start_all_queues(dev);
+ 
+ 	if (bp->ptp_info)
+ 		bp->ptp_info->ptp_init(dev);
+ 
+-pm_exit:
+-	if (err) {
+-		pm_runtime_put_sync(&bp->pdev->dev);
+-		return err;
+-	}
+ 	return 0;
++
++napi_exit:
++	for (q = 0, queue = bp->queues; q < bp->num_queues; ++q, ++queue)
++		napi_disable(&queue->napi);
++pm_exit:
++	pm_runtime_put_sync(&bp->pdev->dev);
++	return err;
+ }
+ 
+ static int macb_close(struct net_device *dev)
+diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
+index 3de549c6c693..197dc5b2c090 100644
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -4678,12 +4678,10 @@ static void ibmvnic_handle_crq(union ibmvnic_crq *crq,
+ 			dev_err(dev, "Error %ld in VERSION_EXCHG_RSP\n", rc);
+ 			break;
+ 		}
+-		dev_info(dev, "Partner protocol version is %d\n",
+-			 crq->version_exchange_rsp.version);
+-		if (be16_to_cpu(crq->version_exchange_rsp.version) <
+-		    ibmvnic_version)
+-			ibmvnic_version =
++		ibmvnic_version =
+ 			    be16_to_cpu(crq->version_exchange_rsp.version);
++		dev_info(dev, "Partner protocol version is %d\n",
++			 ibmvnic_version);
+ 		send_cap_queries(adapter);
+ 		break;
+ 	case QUERY_CAPABILITY_RSP:
+diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
+index 11babc79dc6c..14318dca6921 100644
+--- a/drivers/net/ethernet/marvell/mvneta.c
++++ b/drivers/net/ethernet/marvell/mvneta.c
+@@ -418,11 +418,17 @@ struct mvneta_pcpu_port {
+ 	u32			cause_rx_tx;
+ };
+ 
++enum {
++	__MVNETA_DOWN,
++};
++
+ struct mvneta_port {
+ 	u8 id;
+ 	struct mvneta_pcpu_port __percpu	*ports;
+ 	struct mvneta_pcpu_stats __percpu	*stats;
+ 
++	unsigned long state;
++
+ 	int pkt_size;
+ 	void __iomem *base;
+ 	struct mvneta_rx_queue *rxqs;
+@@ -2066,6 +2072,9 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame,
+ 	int i, drops = 0;
+ 	u32 ret;
+ 
++	if (unlikely(test_bit(__MVNETA_DOWN, &pp->state)))
++		return -ENETDOWN;
++
+ 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+ 		return -EINVAL;
+ 
+@@ -3489,12 +3498,16 @@ static void mvneta_start_dev(struct mvneta_port *pp)
+ 
+ 	phylink_start(pp->phylink);
+ 	netif_tx_start_all_queues(pp->dev);
++
++	clear_bit(__MVNETA_DOWN, &pp->state);
+ }
+ 
+ static void mvneta_stop_dev(struct mvneta_port *pp)
+ {
+ 	unsigned int cpu;
+ 
++	set_bit(__MVNETA_DOWN, &pp->state);
++
+ 	phylink_stop(pp->phylink);
+ 
+ 	if (!pp->neta_armada3700) {
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+index 184c3eaefbcb..c190eb267f3c 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/devlink.c
+@@ -256,7 +256,6 @@ int mlx5_devlink_register(struct devlink *devlink, struct device *dev)
+ 		goto params_reg_err;
+ 	mlx5_devlink_set_params_init_values(devlink);
+ 	devlink_params_publish(devlink);
+-	devlink_reload_enable(devlink);
+ 	return 0;
+ 
+ params_reg_err:
+@@ -266,7 +265,6 @@ params_reg_err:
+ 
+ void mlx5_devlink_unregister(struct devlink *devlink)
+ {
+-	devlink_reload_disable(devlink);
+ 	devlink_params_unregister(devlink, mlx5_devlink_params,
+ 				  ARRAY_SIZE(mlx5_devlink_params));
+ 	devlink_unregister(devlink);
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+index c28cbae42331..2c80205dc939 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/setup.c
+@@ -152,6 +152,10 @@ void mlx5e_close_xsk(struct mlx5e_channel *c)
+ 	mlx5e_close_cq(&c->xskicosq.cq);
+ 	mlx5e_close_xdpsq(&c->xsksq);
+ 	mlx5e_close_cq(&c->xsksq.cq);
++
++	memset(&c->xskrq, 0, sizeof(c->xskrq));
++	memset(&c->xsksq, 0, sizeof(c->xsksq));
++	memset(&c->xskicosq, 0, sizeof(c->xskicosq));
+ }
+ 
+ void mlx5e_activate_xsk(struct mlx5e_channel *c)
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+index 68e7ef7ca52d..ffb360fe44d3 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
+@@ -193,15 +193,23 @@ static bool reset_fw_if_needed(struct mlx5_core_dev *dev)
+ 
+ void mlx5_enter_error_state(struct mlx5_core_dev *dev, bool force)
+ {
++	bool err_detected = false;
++
++	/* Mark the device as fatal in order to abort FW commands */
++	if ((check_fatal_sensors(dev) || force) &&
++	    dev->state == MLX5_DEVICE_STATE_UP) {
++		dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
++		err_detected = true;
++	}
+ 	mutex_lock(&dev->intf_state_mutex);
+-	if (dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
+-		goto unlock;
++	if (!err_detected && dev->state == MLX5_DEVICE_STATE_INTERNAL_ERROR)
++		goto unlock;/* a previous error is still being handled */
+ 	if (dev->state == MLX5_DEVICE_STATE_UNINITIALIZED) {
+ 		dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ 		goto unlock;
+ 	}
+ 
+-	if (check_fatal_sensors(dev) || force) {
++	if (check_fatal_sensors(dev) || force) { /* protected state setting */
+ 		dev->state = MLX5_DEVICE_STATE_INTERNAL_ERROR;
+ 		mlx5_cmd_flush(dev);
+ 	}
+diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+index 20e12e14cfa8..743491babf88 100644
+--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
++++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
+@@ -794,6 +794,11 @@ err_disable:
+ 
+ static void mlx5_pci_close(struct mlx5_core_dev *dev)
+ {
++	/* health work might still be active, and it needs pci bar in
++	 * order to know the NIC state. Therefore, drain the health WQ
++	 * before removing the pci bars
++	 */
++	mlx5_drain_health_wq(dev);
+ 	iounmap(dev->iseg);
+ 	pci_clear_master(dev->pdev);
+ 	release_bar(dev->pdev);
+@@ -1366,6 +1371,7 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *id)
+ 		dev_err(&pdev->dev, "mlx5_crdump_enable failed with error code %d\n", err);
+ 
+ 	pci_save_state(pdev);
++	devlink_reload_enable(devlink);
+ 	return 0;
+ 
+ err_load_one:
+@@ -1383,6 +1389,7 @@ static void remove_one(struct pci_dev *pdev)
+ 	struct mlx5_core_dev *dev  = pci_get_drvdata(pdev);
+ 	struct devlink *devlink = priv_to_devlink(dev);
+ 
++	devlink_reload_disable(devlink);
+ 	mlx5_crdump_disable(dev);
+ 	mlx5_devlink_unregister(devlink);
+ 
+diff --git a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+index ce0a6837daa3..05f8d5a92862 100644
+--- a/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
++++ b/drivers/net/ethernet/mellanox/mlxsw/core_thermal.c
+@@ -391,8 +391,7 @@ static int mlxsw_thermal_set_trip_hyst(struct thermal_zone_device *tzdev,
+ static int mlxsw_thermal_trend_get(struct thermal_zone_device *tzdev,
+ 				   int trip, enum thermal_trend *trend)
+ {
+-	struct mlxsw_thermal_module *tz = tzdev->devdata;
+-	struct mlxsw_thermal *thermal = tz->parent;
++	struct mlxsw_thermal *thermal = tzdev->devdata;
+ 
+ 	if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS)
+ 		return -EINVAL;
+@@ -593,6 +592,22 @@ mlxsw_thermal_module_trip_hyst_set(struct thermal_zone_device *tzdev, int trip,
+ 	return 0;
+ }
+ 
++static int mlxsw_thermal_module_trend_get(struct thermal_zone_device *tzdev,
++					  int trip, enum thermal_trend *trend)
++{
++	struct mlxsw_thermal_module *tz = tzdev->devdata;
++	struct mlxsw_thermal *thermal = tz->parent;
++
++	if (trip < 0 || trip >= MLXSW_THERMAL_NUM_TRIPS)
++		return -EINVAL;
++
++	if (tzdev == thermal->tz_highest_dev)
++		return 1;
++
++	*trend = THERMAL_TREND_STABLE;
++	return 0;
++}
++
+ static struct thermal_zone_device_ops mlxsw_thermal_module_ops = {
+ 	.bind		= mlxsw_thermal_module_bind,
+ 	.unbind		= mlxsw_thermal_module_unbind,
+@@ -604,7 +619,7 @@ static struct thermal_zone_device_ops mlxsw_thermal_module_ops = {
+ 	.set_trip_temp	= mlxsw_thermal_module_trip_temp_set,
+ 	.get_trip_hyst	= mlxsw_thermal_module_trip_hyst_get,
+ 	.set_trip_hyst	= mlxsw_thermal_module_trip_hyst_set,
+-	.get_trend	= mlxsw_thermal_trend_get,
++	.get_trend	= mlxsw_thermal_module_trend_get,
+ };
+ 
+ static int mlxsw_thermal_gearbox_temp_get(struct thermal_zone_device *tzdev,
+@@ -643,7 +658,7 @@ static struct thermal_zone_device_ops mlxsw_thermal_gearbox_ops = {
+ 	.set_trip_temp	= mlxsw_thermal_module_trip_temp_set,
+ 	.get_trip_hyst	= mlxsw_thermal_module_trip_hyst_get,
+ 	.set_trip_hyst	= mlxsw_thermal_module_trip_hyst_set,
+-	.get_trend	= mlxsw_thermal_trend_get,
++	.get_trend	= mlxsw_thermal_module_trend_get,
+ };
+ 
+ static int mlxsw_thermal_get_max_state(struct thermal_cooling_device *cdev,
+diff --git a/drivers/net/net_failover.c b/drivers/net/net_failover.c
+index b16a1221d19b..fb182bec8f06 100644
+--- a/drivers/net/net_failover.c
++++ b/drivers/net/net_failover.c
+@@ -61,7 +61,8 @@ static int net_failover_open(struct net_device *dev)
+ 	return 0;
+ 
+ err_standby_open:
+-	dev_close(primary_dev);
++	if (primary_dev)
++		dev_close(primary_dev);
+ err_primary_open:
+ 	netif_tx_disable(dev);
+ 	return err;
+diff --git a/drivers/net/tun.c b/drivers/net/tun.c
+index 3063f2c9fa63..d720f15cb1dc 100644
+--- a/drivers/net/tun.c
++++ b/drivers/net/tun.c
+@@ -1908,8 +1908,11 @@ drop:
+ 		skb->dev = tun->dev;
+ 		break;
+ 	case IFF_TAP:
+-		if (!frags)
+-			skb->protocol = eth_type_trans(skb, tun->dev);
++		if (frags && !pskb_may_pull(skb, ETH_HLEN)) {
++			err = -ENOMEM;
++			goto drop;
++		}
++		skb->protocol = eth_type_trans(skb, tun->dev);
+ 		break;
+ 	}
+ 
+@@ -1966,9 +1969,12 @@ drop:
+ 	}
+ 
+ 	if (frags) {
++		u32 headlen;
++
+ 		/* Exercise flow dissector code path. */
+-		u32 headlen = eth_get_headlen(tun->dev, skb->data,
+-					      skb_headlen(skb));
++		skb_push(skb, ETH_HLEN);
++		headlen = eth_get_headlen(tun->dev, skb->data,
++					  skb_headlen(skb));
+ 
+ 		if (unlikely(headlen > skb_headlen(skb))) {
+ 			this_cpu_inc(tun->pcpu_stats->rx_dropped);
+diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
+index a5b415fed11e..779e56c43d27 100644
+--- a/drivers/net/vxlan.c
++++ b/drivers/net/vxlan.c
+@@ -1924,6 +1924,10 @@ static struct sk_buff *vxlan_na_create(struct sk_buff *request,
+ 	ns_olen = request->len - skb_network_offset(request) -
+ 		sizeof(struct ipv6hdr) - sizeof(*ns);
+ 	for (i = 0; i < ns_olen-1; i += (ns->opt[i+1]<<3)) {
++		if (!ns->opt[i + 1]) {
++			kfree_skb(reply);
++			return NULL;
++		}
+ 		if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
+ 			daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
+ 			break;
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.c b/drivers/net/wireless/ath/ath9k/hif_usb.c
+index dd0c32379375..4ed21dad6a8e 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.c
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.c
+@@ -612,6 +612,11 @@ static void ath9k_hif_usb_rx_stream(struct hif_device_usb *hif_dev,
+ 			hif_dev->remain_skb = nskb;
+ 			spin_unlock(&hif_dev->rx_lock);
+ 		} else {
++			if (pool_index == MAX_PKT_NUM_IN_TRANSFER) {
++				dev_err(&hif_dev->udev->dev,
++					"ath9k_htc: over RX MAX_PKT_NUM\n");
++				goto err;
++			}
+ 			nskb = __dev_alloc_skb(pkt_len + 32, GFP_ATOMIC);
+ 			if (!nskb) {
+ 				dev_err(&hif_dev->udev->dev,
+@@ -638,9 +643,9 @@ err:
+ 
+ static void ath9k_hif_usb_rx_cb(struct urb *urb)
+ {
+-	struct sk_buff *skb = (struct sk_buff *) urb->context;
+-	struct hif_device_usb *hif_dev =
+-		usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
++	struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
++	struct hif_device_usb *hif_dev = rx_buf->hif_dev;
++	struct sk_buff *skb = rx_buf->skb;
+ 	int ret;
+ 
+ 	if (!skb)
+@@ -680,14 +685,15 @@ resubmit:
+ 	return;
+ free:
+ 	kfree_skb(skb);
++	kfree(rx_buf);
+ }
+ 
+ static void ath9k_hif_usb_reg_in_cb(struct urb *urb)
+ {
+-	struct sk_buff *skb = (struct sk_buff *) urb->context;
++	struct rx_buf *rx_buf = (struct rx_buf *)urb->context;
++	struct hif_device_usb *hif_dev = rx_buf->hif_dev;
++	struct sk_buff *skb = rx_buf->skb;
+ 	struct sk_buff *nskb;
+-	struct hif_device_usb *hif_dev =
+-		usb_get_intfdata(usb_ifnum_to_if(urb->dev, 0));
+ 	int ret;
+ 
+ 	if (!skb)
+@@ -745,6 +751,7 @@ resubmit:
+ 	return;
+ free:
+ 	kfree_skb(skb);
++	kfree(rx_buf);
+ 	urb->context = NULL;
+ }
+ 
+@@ -790,7 +797,7 @@ static int ath9k_hif_usb_alloc_tx_urbs(struct hif_device_usb *hif_dev)
+ 	init_usb_anchor(&hif_dev->mgmt_submitted);
+ 
+ 	for (i = 0; i < MAX_TX_URB_NUM; i++) {
+-		tx_buf = kzalloc(sizeof(struct tx_buf), GFP_KERNEL);
++		tx_buf = kzalloc(sizeof(*tx_buf), GFP_KERNEL);
+ 		if (!tx_buf)
+ 			goto err;
+ 
+@@ -827,8 +834,9 @@ static void ath9k_hif_usb_dealloc_rx_urbs(struct hif_device_usb *hif_dev)
+ 
+ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ {
+-	struct urb *urb = NULL;
++	struct rx_buf *rx_buf = NULL;
+ 	struct sk_buff *skb = NULL;
++	struct urb *urb = NULL;
+ 	int i, ret;
+ 
+ 	init_usb_anchor(&hif_dev->rx_submitted);
+@@ -836,6 +844,12 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ 
+ 	for (i = 0; i < MAX_RX_URB_NUM; i++) {
+ 
++		rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
++		if (!rx_buf) {
++			ret = -ENOMEM;
++			goto err_rxb;
++		}
++
+ 		/* Allocate URB */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (urb == NULL) {
+@@ -850,11 +864,14 @@ static int ath9k_hif_usb_alloc_rx_urbs(struct hif_device_usb *hif_dev)
+ 			goto err_skb;
+ 		}
+ 
++		rx_buf->hif_dev = hif_dev;
++		rx_buf->skb = skb;
++
+ 		usb_fill_bulk_urb(urb, hif_dev->udev,
+ 				  usb_rcvbulkpipe(hif_dev->udev,
+ 						  USB_WLAN_RX_PIPE),
+ 				  skb->data, MAX_RX_BUF_SIZE,
+-				  ath9k_hif_usb_rx_cb, skb);
++				  ath9k_hif_usb_rx_cb, rx_buf);
+ 
+ 		/* Anchor URB */
+ 		usb_anchor_urb(urb, &hif_dev->rx_submitted);
+@@ -880,6 +897,8 @@ err_submit:
+ err_skb:
+ 	usb_free_urb(urb);
+ err_urb:
++	kfree(rx_buf);
++err_rxb:
+ 	ath9k_hif_usb_dealloc_rx_urbs(hif_dev);
+ 	return ret;
+ }
+@@ -891,14 +910,21 @@ static void ath9k_hif_usb_dealloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ 
+ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ {
+-	struct urb *urb = NULL;
++	struct rx_buf *rx_buf = NULL;
+ 	struct sk_buff *skb = NULL;
++	struct urb *urb = NULL;
+ 	int i, ret;
+ 
+ 	init_usb_anchor(&hif_dev->reg_in_submitted);
+ 
+ 	for (i = 0; i < MAX_REG_IN_URB_NUM; i++) {
+ 
++		rx_buf = kzalloc(sizeof(*rx_buf), GFP_KERNEL);
++		if (!rx_buf) {
++			ret = -ENOMEM;
++			goto err_rxb;
++		}
++
+ 		/* Allocate URB */
+ 		urb = usb_alloc_urb(0, GFP_KERNEL);
+ 		if (urb == NULL) {
+@@ -913,11 +939,14 @@ static int ath9k_hif_usb_alloc_reg_in_urbs(struct hif_device_usb *hif_dev)
+ 			goto err_skb;
+ 		}
+ 
++		rx_buf->hif_dev = hif_dev;
++		rx_buf->skb = skb;
++
+ 		usb_fill_int_urb(urb, hif_dev->udev,
+ 				  usb_rcvintpipe(hif_dev->udev,
+ 						  USB_REG_IN_PIPE),
+ 				  skb->data, MAX_REG_IN_BUF_SIZE,
+-				  ath9k_hif_usb_reg_in_cb, skb, 1);
++				  ath9k_hif_usb_reg_in_cb, rx_buf, 1);
+ 
+ 		/* Anchor URB */
+ 		usb_anchor_urb(urb, &hif_dev->reg_in_submitted);
+@@ -943,6 +972,8 @@ err_submit:
+ err_skb:
+ 	usb_free_urb(urb);
+ err_urb:
++	kfree(rx_buf);
++err_rxb:
+ 	ath9k_hif_usb_dealloc_reg_in_urbs(hif_dev);
+ 	return ret;
+ }
+@@ -973,7 +1004,7 @@ err:
+ 	return -ENOMEM;
+ }
+ 
+-static void ath9k_hif_usb_dealloc_urbs(struct hif_device_usb *hif_dev)
++void ath9k_hif_usb_dealloc_urbs(struct hif_device_usb *hif_dev)
+ {
+ 	usb_kill_anchored_urbs(&hif_dev->regout_submitted);
+ 	ath9k_hif_usb_dealloc_reg_in_urbs(hif_dev);
+@@ -1341,8 +1372,9 @@ static void ath9k_hif_usb_disconnect(struct usb_interface *interface)
+ 
+ 	if (hif_dev->flags & HIF_USB_READY) {
+ 		ath9k_htc_hw_deinit(hif_dev->htc_handle, unplugged);
+-		ath9k_htc_hw_free(hif_dev->htc_handle);
+ 		ath9k_hif_usb_dev_deinit(hif_dev);
++		ath9k_destoy_wmi(hif_dev->htc_handle->drv_priv);
++		ath9k_htc_hw_free(hif_dev->htc_handle);
+ 	}
+ 
+ 	usb_set_intfdata(interface, NULL);
+diff --git a/drivers/net/wireless/ath/ath9k/hif_usb.h b/drivers/net/wireless/ath/ath9k/hif_usb.h
+index 7846916aa01d..5985aa15ca93 100644
+--- a/drivers/net/wireless/ath/ath9k/hif_usb.h
++++ b/drivers/net/wireless/ath/ath9k/hif_usb.h
+@@ -86,6 +86,11 @@ struct tx_buf {
+ 	struct list_head list;
+ };
+ 
++struct rx_buf {
++	struct sk_buff *skb;
++	struct hif_device_usb *hif_dev;
++};
++
+ #define HIF_USB_TX_STOP  BIT(0)
+ #define HIF_USB_TX_FLUSH BIT(1)
+ 
+@@ -133,5 +138,6 @@ struct hif_device_usb {
+ 
+ int ath9k_hif_usb_init(void);
+ void ath9k_hif_usb_exit(void);
++void ath9k_hif_usb_dealloc_urbs(struct hif_device_usb *hif_dev);
+ 
+ #endif /* HTC_USB_H */
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_init.c b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+index d961095ab01f..40a065028ebe 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_init.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_init.c
+@@ -931,8 +931,9 @@ err_init:
+ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ 			   u16 devid, char *product, u32 drv_info)
+ {
+-	struct ieee80211_hw *hw;
++	struct hif_device_usb *hif_dev;
+ 	struct ath9k_htc_priv *priv;
++	struct ieee80211_hw *hw;
+ 	int ret;
+ 
+ 	hw = ieee80211_alloc_hw(sizeof(struct ath9k_htc_priv), &ath9k_htc_ops);
+@@ -967,7 +968,10 @@ int ath9k_htc_probe_device(struct htc_target *htc_handle, struct device *dev,
+ 	return 0;
+ 
+ err_init:
+-	ath9k_deinit_wmi(priv);
++	ath9k_stop_wmi(priv);
++	hif_dev = (struct hif_device_usb *)htc_handle->hif_dev;
++	ath9k_hif_usb_dealloc_urbs(hif_dev);
++	ath9k_destoy_wmi(priv);
+ err_free:
+ 	ieee80211_free_hw(hw);
+ 	return ret;
+@@ -982,7 +986,7 @@ void ath9k_htc_disconnect_device(struct htc_target *htc_handle, bool hotunplug)
+ 			htc_handle->drv_priv->ah->ah_flags |= AH_UNPLUGGED;
+ 
+ 		ath9k_deinit_device(htc_handle->drv_priv);
+-		ath9k_deinit_wmi(htc_handle->drv_priv);
++		ath9k_stop_wmi(htc_handle->drv_priv);
+ 		ieee80211_free_hw(htc_handle->drv_priv->hw);
+ 	}
+ }
+diff --git a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+index 9cec5c216e1f..118e5550b10c 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
++++ b/drivers/net/wireless/ath/ath9k/htc_drv_txrx.c
+@@ -999,9 +999,9 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
+ 	 * which are not PHY_ERROR (short radar pulses have a length of 3)
+ 	 */
+ 	if (unlikely(!rs_datalen || (rs_datalen < 10 && !is_phyerr))) {
+-		ath_warn(common,
+-			 "Short RX data len, dropping (dlen: %d)\n",
+-			 rs_datalen);
++		ath_dbg(common, ANY,
++			"Short RX data len, dropping (dlen: %d)\n",
++			rs_datalen);
+ 		goto rx_next;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/htc_hst.c b/drivers/net/wireless/ath/ath9k/htc_hst.c
+index d091c8ebdcf0..d2e062eaf561 100644
+--- a/drivers/net/wireless/ath/ath9k/htc_hst.c
++++ b/drivers/net/wireless/ath/ath9k/htc_hst.c
+@@ -113,6 +113,9 @@ static void htc_process_conn_rsp(struct htc_target *target,
+ 
+ 	if (svc_rspmsg->status == HTC_SERVICE_SUCCESS) {
+ 		epid = svc_rspmsg->endpoint_id;
++		if (epid < 0 || epid >= ENDPOINT_MAX)
++			return;
++
+ 		service_id = be16_to_cpu(svc_rspmsg->service_id);
+ 		max_msglen = be16_to_cpu(svc_rspmsg->max_msg_len);
+ 		endpoint = &target->endpoint[epid];
+@@ -170,7 +173,6 @@ static int htc_config_pipe_credits(struct htc_target *target)
+ 	time_left = wait_for_completion_timeout(&target->cmd_wait, HZ);
+ 	if (!time_left) {
+ 		dev_err(target->dev, "HTC credit config timeout\n");
+-		kfree_skb(skb);
+ 		return -ETIMEDOUT;
+ 	}
+ 
+@@ -206,7 +208,6 @@ static int htc_setup_complete(struct htc_target *target)
+ 	time_left = wait_for_completion_timeout(&target->cmd_wait, HZ);
+ 	if (!time_left) {
+ 		dev_err(target->dev, "HTC start timeout\n");
+-		kfree_skb(skb);
+ 		return -ETIMEDOUT;
+ 	}
+ 
+@@ -279,7 +280,6 @@ int htc_connect_service(struct htc_target *target,
+ 	if (!time_left) {
+ 		dev_err(target->dev, "Service connection timeout for: %d\n",
+ 			service_connreq->service_id);
+-		kfree_skb(skb);
+ 		return -ETIMEDOUT;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.c b/drivers/net/wireless/ath/ath9k/wmi.c
+index cdc146091194..e7a3127395be 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.c
++++ b/drivers/net/wireless/ath/ath9k/wmi.c
+@@ -112,14 +112,17 @@ struct wmi *ath9k_init_wmi(struct ath9k_htc_priv *priv)
+ 	return wmi;
+ }
+ 
+-void ath9k_deinit_wmi(struct ath9k_htc_priv *priv)
++void ath9k_stop_wmi(struct ath9k_htc_priv *priv)
+ {
+ 	struct wmi *wmi = priv->wmi;
+ 
+ 	mutex_lock(&wmi->op_mutex);
+ 	wmi->stopped = true;
+ 	mutex_unlock(&wmi->op_mutex);
++}
+ 
++void ath9k_destoy_wmi(struct ath9k_htc_priv *priv)
++{
+ 	kfree(priv->wmi);
+ }
+ 
+@@ -336,7 +339,6 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ 		ath_dbg(common, WMI, "Timeout waiting for WMI command: %s\n",
+ 			wmi_cmd_to_name(cmd_id));
+ 		mutex_unlock(&wmi->op_mutex);
+-		kfree_skb(skb);
+ 		return -ETIMEDOUT;
+ 	}
+ 
+diff --git a/drivers/net/wireless/ath/ath9k/wmi.h b/drivers/net/wireless/ath/ath9k/wmi.h
+index 380175d5ecd7..d8b912206232 100644
+--- a/drivers/net/wireless/ath/ath9k/wmi.h
++++ b/drivers/net/wireless/ath/ath9k/wmi.h
+@@ -179,7 +179,6 @@ struct wmi {
+ };
+ 
+ struct wmi *ath9k_init_wmi(struct ath9k_htc_priv *priv);
+-void ath9k_deinit_wmi(struct ath9k_htc_priv *priv);
+ int ath9k_wmi_connect(struct htc_target *htc, struct wmi *wmi,
+ 		      enum htc_endpoint_id *wmi_ctrl_epid);
+ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+@@ -189,6 +188,8 @@ int ath9k_wmi_cmd(struct wmi *wmi, enum wmi_cmd_id cmd_id,
+ void ath9k_wmi_event_tasklet(unsigned long data);
+ void ath9k_fatal_work(struct work_struct *work);
+ void ath9k_wmi_event_drain(struct ath9k_htc_priv *priv);
++void ath9k_stop_wmi(struct ath9k_htc_priv *priv);
++void ath9k_destoy_wmi(struct ath9k_htc_priv *priv);
+ 
+ #define WMI_CMD(_wmi_cmd)						\
+ 	do {								\
+diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
+index d828ca835a98..fe9fbb74ce72 100644
+--- a/drivers/pci/pci.c
++++ b/drivers/pci/pci.c
+@@ -4616,10 +4616,10 @@ static bool pcie_wait_for_link_delay(struct pci_dev *pdev, bool active,
+ 
+ 	/*
+ 	 * Some controllers might not implement link active reporting. In this
+-	 * case, we wait for 1000 + 100 ms.
++	 * case, we wait for 1000 ms + any delay requested by the caller.
+ 	 */
+ 	if (!pdev->link_active_reporting) {
+-		msleep(1100);
++		msleep(timeout + delay);
+ 		return true;
+ 	}
+ 
+diff --git a/drivers/platform/x86/sony-laptop.c b/drivers/platform/x86/sony-laptop.c
+index fb088dd8529e..32fa60feaadb 100644
+--- a/drivers/platform/x86/sony-laptop.c
++++ b/drivers/platform/x86/sony-laptop.c
+@@ -757,33 +757,6 @@ static union acpi_object *__call_snc_method(acpi_handle handle, char *method,
+ 	return result;
+ }
+ 
+-static int sony_nc_int_call(acpi_handle handle, char *name, int *value,
+-		int *result)
+-{
+-	union acpi_object *object = NULL;
+-	if (value) {
+-		u64 v = *value;
+-		object = __call_snc_method(handle, name, &v);
+-	} else
+-		object = __call_snc_method(handle, name, NULL);
+-
+-	if (!object)
+-		return -EINVAL;
+-
+-	if (object->type != ACPI_TYPE_INTEGER) {
+-		pr_warn("Invalid acpi_object: expected 0x%x got 0x%x\n",
+-				ACPI_TYPE_INTEGER, object->type);
+-		kfree(object);
+-		return -EINVAL;
+-	}
+-
+-	if (result)
+-		*result = object->integer.value;
+-
+-	kfree(object);
+-	return 0;
+-}
+-
+ #define MIN(a, b)	(a > b ? b : a)
+ static int sony_nc_buffer_call(acpi_handle handle, char *name, u64 *value,
+ 		void *buffer, size_t buflen)
+@@ -795,17 +768,20 @@ static int sony_nc_buffer_call(acpi_handle handle, char *name, u64 *value,
+ 	if (!object)
+ 		return -EINVAL;
+ 
+-	if (object->type == ACPI_TYPE_BUFFER) {
++	if (!buffer) {
++		/* do nothing */
++	} else if (object->type == ACPI_TYPE_BUFFER) {
+ 		len = MIN(buflen, object->buffer.length);
++		memset(buffer, 0, buflen);
+ 		memcpy(buffer, object->buffer.pointer, len);
+ 
+ 	} else if (object->type == ACPI_TYPE_INTEGER) {
+ 		len = MIN(buflen, sizeof(object->integer.value));
++		memset(buffer, 0, buflen);
+ 		memcpy(buffer, &object->integer.value, len);
+ 
+ 	} else {
+-		pr_warn("Invalid acpi_object: expected 0x%x got 0x%x\n",
+-				ACPI_TYPE_BUFFER, object->type);
++		pr_warn("Unexpected acpi_object: 0x%x\n", object->type);
+ 		ret = -EINVAL;
+ 	}
+ 
+@@ -813,6 +789,23 @@ static int sony_nc_buffer_call(acpi_handle handle, char *name, u64 *value,
+ 	return ret;
+ }
+ 
++static int sony_nc_int_call(acpi_handle handle, char *name, int *value, int
++		*result)
++{
++	int ret;
++
++	if (value) {
++		u64 v = *value;
++
++		ret = sony_nc_buffer_call(handle, name, &v, result,
++				sizeof(*result));
++	} else {
++		ret =  sony_nc_buffer_call(handle, name, NULL, result,
++				sizeof(*result));
++	}
++	return ret;
++}
++
+ struct sony_nc_handles {
+ 	u16 cap[0x10];
+ 	struct device_attribute devattr;
+@@ -2295,7 +2288,12 @@ static void sony_nc_thermal_cleanup(struct platform_device *pd)
+ #ifdef CONFIG_PM_SLEEP
+ static void sony_nc_thermal_resume(void)
+ {
+-	unsigned int status = sony_nc_thermal_mode_get();
++	int status;
++
++	if (!th_handle)
++		return;
++
++	status = sony_nc_thermal_mode_get();
+ 
+ 	if (status != th_handle->mode)
+ 		sony_nc_thermal_mode_set(th_handle->mode);
+diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
+index 097f33e4f1f3..ba18f32bd0c4 100644
+--- a/drivers/remoteproc/remoteproc_core.c
++++ b/drivers/remoteproc/remoteproc_core.c
+@@ -510,7 +510,7 @@ static int rproc_handle_vdev(struct rproc *rproc, struct fw_rsc_vdev *rsc,
+ 
+ 	/* Initialise vdev subdevice */
+ 	snprintf(name, sizeof(name), "vdev%dbuffer", rvdev->index);
+-	rvdev->dev.parent = rproc->dev.parent;
++	rvdev->dev.parent = &rproc->dev;
+ 	rvdev->dev.dma_pfn_offset = rproc->dev.parent->dma_pfn_offset;
+ 	rvdev->dev.release = rproc_rvdev_release;
+ 	dev_set_name(&rvdev->dev, "%s#%s", dev_name(rvdev->dev.parent), name);
+diff --git a/drivers/remoteproc/remoteproc_virtio.c b/drivers/remoteproc/remoteproc_virtio.c
+index 31a62a0b470e..380d52672035 100644
+--- a/drivers/remoteproc/remoteproc_virtio.c
++++ b/drivers/remoteproc/remoteproc_virtio.c
+@@ -375,6 +375,18 @@ int rproc_add_virtio_dev(struct rproc_vdev *rvdev, int id)
+ 				goto out;
+ 			}
+ 		}
++	} else {
++		struct device_node *np = rproc->dev.parent->of_node;
++
++		/*
++		 * If we don't have dedicated buffer, just attempt to re-assign
++		 * the reserved memory from our parent. A default memory-region
++		 * at index 0 from the parent's memory-regions is assigned for
++		 * the rvdev dev to allocate from. Failure is non-critical and
++		 * the allocations will fall back to global pools, so don't
++		 * check return value either.
++		 */
++		of_reserved_mem_device_init_by_idx(dev, np, 0);
+ 	}
+ 
+ 	/* Allocate virtio device */
+diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c
+index 58b35a1442c1..001b319a30ee 100644
+--- a/drivers/scsi/lpfc/lpfc_ct.c
++++ b/drivers/scsi/lpfc/lpfc_ct.c
+@@ -462,7 +462,6 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type)
+ 	struct lpfc_nodelist *ndlp;
+ 
+ 	if ((vport->port_type != LPFC_NPIV_PORT) ||
+-	    (fc4_type == FC_TYPE_FCP) ||
+ 	    !(vport->ct_flags & FC_CT_RFF_ID) || !vport->cfg_restrict_login) {
+ 
+ 		ndlp = lpfc_setup_disc_node(vport, Did);
+diff --git a/drivers/scsi/megaraid/megaraid_sas.h b/drivers/scsi/megaraid/megaraid_sas.h
+index 83d8c4cb1ad5..98827363bc49 100644
+--- a/drivers/scsi/megaraid/megaraid_sas.h
++++ b/drivers/scsi/megaraid/megaraid_sas.h
+@@ -511,7 +511,7 @@ union MR_PROGRESS {
+  */
+ struct MR_PD_PROGRESS {
+ 	struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ 		u32     rbld:1;
+ 		u32     patrol:1;
+ 		u32     clear:1;
+@@ -537,7 +537,7 @@ struct MR_PD_PROGRESS {
+ 	};
+ 
+ 	struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ 		u32     rbld:1;
+ 		u32     patrol:1;
+ 		u32     clear:1;
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.c b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+index b2ad96564484..03a6c86475c8 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.c
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.c
+@@ -4238,6 +4238,7 @@ void megasas_refire_mgmt_cmd(struct megasas_instance *instance,
+ 	struct fusion_context *fusion;
+ 	struct megasas_cmd *cmd_mfi;
+ 	union MEGASAS_REQUEST_DESCRIPTOR_UNION *req_desc;
++	struct MPI2_RAID_SCSI_IO_REQUEST *scsi_io_req;
+ 	u16 smid;
+ 	bool refire_cmd = 0;
+ 	u8 result;
+@@ -4305,6 +4306,11 @@ void megasas_refire_mgmt_cmd(struct megasas_instance *instance,
+ 			result = COMPLETE_CMD;
+ 		}
+ 
++		scsi_io_req = (struct MPI2_RAID_SCSI_IO_REQUEST *)
++				cmd_fusion->io_request;
++		if (scsi_io_req->Function == MPI2_FUNCTION_SCSI_TASK_MGMT)
++			result = RETURN_CMD;
++
+ 		switch (result) {
+ 		case REFIRE_CMD:
+ 			megasas_fire_cmd_fusion(instance, req_desc);
+@@ -4533,7 +4539,6 @@ megasas_issue_tm(struct megasas_instance *instance, u16 device_handle,
+ 	if (!timeleft) {
+ 		dev_err(&instance->pdev->dev,
+ 			"task mgmt type 0x%x timed out\n", type);
+-		cmd_mfi->flags |= DRV_DCMD_SKIP_REFIRE;
+ 		mutex_unlock(&instance->reset_mutex);
+ 		rc = megasas_reset_fusion(instance->host, MFI_IO_TIMEOUT_OCR);
+ 		mutex_lock(&instance->reset_mutex);
+diff --git a/drivers/scsi/megaraid/megaraid_sas_fusion.h b/drivers/scsi/megaraid/megaraid_sas_fusion.h
+index d57ecc7f88d8..30de4b01f703 100644
+--- a/drivers/scsi/megaraid/megaraid_sas_fusion.h
++++ b/drivers/scsi/megaraid/megaraid_sas_fusion.h
+@@ -774,7 +774,7 @@ struct MR_SPAN_BLOCK_INFO {
+ struct MR_CPU_AFFINITY_MASK {
+ 	union {
+ 		struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ 		u8 hw_path:1;
+ 		u8 cpu0:1;
+ 		u8 cpu1:1;
+@@ -866,7 +866,7 @@ struct MR_LD_RAID {
+ 	__le16     seqNum;
+ 
+ struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ 	u32 ldSyncRequired:1;
+ 	u32 regTypeReqOnReadIsValid:1;
+ 	u32 isEPD:1;
+@@ -889,7 +889,7 @@ struct {
+ 	/* 0x30 - 0x33, Logical block size for the LD */
+ 	u32 logical_block_length;
+ 	struct {
+-#ifndef MFI_BIG_ENDIAN
++#ifndef __BIG_ENDIAN_BITFIELD
+ 	/* 0x34, P_I_EXPONENT from READ CAPACITY 16 */
+ 	u32 ld_pi_exp:4;
+ 	/* 0x34, LOGICAL BLOCKS PER PHYSICAL
+diff --git a/drivers/spi/spi-bcm-qspi.c b/drivers/spi/spi-bcm-qspi.c
+index 23d295f36c80..c64be5e8fb8a 100644
+--- a/drivers/spi/spi-bcm-qspi.c
++++ b/drivers/spi/spi-bcm-qspi.c
+@@ -670,7 +670,7 @@ static void read_from_hw(struct bcm_qspi *qspi, int slots)
+ 			if (buf)
+ 				buf[tp.byte] = read_rxram_slot_u8(qspi, slot);
+ 			dev_dbg(&qspi->pdev->dev, "RD %02x\n",
+-				buf ? buf[tp.byte] : 0xff);
++				buf ? buf[tp.byte] : 0x0);
+ 		} else {
+ 			u16 *buf = tp.trans->rx_buf;
+ 
+@@ -678,7 +678,7 @@ static void read_from_hw(struct bcm_qspi *qspi, int slots)
+ 				buf[tp.byte / 2] = read_rxram_slot_u16(qspi,
+ 								      slot);
+ 			dev_dbg(&qspi->pdev->dev, "RD %04x\n",
+-				buf ? buf[tp.byte] : 0xffff);
++				buf ? buf[tp.byte / 2] : 0x0);
+ 		}
+ 
+ 		update_qspi_trans_byte_count(qspi, &tp,
+@@ -733,13 +733,13 @@ static int write_to_hw(struct bcm_qspi *qspi, struct spi_device *spi)
+ 	while (!tstatus && slot < MSPI_NUM_CDRAM) {
+ 		if (tp.trans->bits_per_word <= 8) {
+ 			const u8 *buf = tp.trans->tx_buf;
+-			u8 val = buf ? buf[tp.byte] : 0xff;
++			u8 val = buf ? buf[tp.byte] : 0x00;
+ 
+ 			write_txram_slot_u8(qspi, slot, val);
+ 			dev_dbg(&qspi->pdev->dev, "WR %02x\n", val);
+ 		} else {
+ 			const u16 *buf = tp.trans->tx_buf;
+-			u16 val = buf ? buf[tp.byte / 2] : 0xffff;
++			u16 val = buf ? buf[tp.byte / 2] : 0x0000;
+ 
+ 			write_txram_slot_u16(qspi, slot, val);
+ 			dev_dbg(&qspi->pdev->dev, "WR %04x\n", val);
+@@ -1222,6 +1222,11 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 	}
+ 
+ 	qspi = spi_master_get_devdata(master);
++
++	qspi->clk = devm_clk_get_optional(&pdev->dev, NULL);
++	if (IS_ERR(qspi->clk))
++		return PTR_ERR(qspi->clk);
++
+ 	qspi->pdev = pdev;
+ 	qspi->trans_pos.trans = NULL;
+ 	qspi->trans_pos.byte = 0;
+@@ -1335,13 +1340,6 @@ int bcm_qspi_probe(struct platform_device *pdev,
+ 		qspi->soc_intc = NULL;
+ 	}
+ 
+-	qspi->clk = devm_clk_get(&pdev->dev, NULL);
+-	if (IS_ERR(qspi->clk)) {
+-		dev_warn(dev, "unable to get clock\n");
+-		ret = PTR_ERR(qspi->clk);
+-		goto qspi_probe_err;
+-	}
+-
+ 	ret = clk_prepare_enable(qspi->clk);
+ 	if (ret) {
+ 		dev_err(dev, "failed to prepare clock\n");
+diff --git a/drivers/spi/spi-bcm2835.c b/drivers/spi/spi-bcm2835.c
+index 11c235879bb7..fd887a6492f4 100644
+--- a/drivers/spi/spi-bcm2835.c
++++ b/drivers/spi/spi-bcm2835.c
+@@ -1347,7 +1347,7 @@ static int bcm2835_spi_probe(struct platform_device *pdev)
+ 		goto out_dma_release;
+ 	}
+ 
+-	err = devm_spi_register_controller(&pdev->dev, ctlr);
++	err = spi_register_controller(ctlr);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "could not register SPI controller: %d\n",
+ 			err);
+@@ -1374,6 +1374,8 @@ static int bcm2835_spi_remove(struct platform_device *pdev)
+ 
+ 	bcm2835_debugfs_remove(bs);
+ 
++	spi_unregister_controller(ctlr);
++
+ 	/* Clear FIFOs, and disable the HW block */
+ 	bcm2835_wr(bs, BCM2835_SPI_CS,
+ 		   BCM2835_SPI_CS_CLEAR_RX | BCM2835_SPI_CS_CLEAR_TX);
+diff --git a/drivers/spi/spi-bcm2835aux.c b/drivers/spi/spi-bcm2835aux.c
+index a2162ff56a12..c331efd6e86b 100644
+--- a/drivers/spi/spi-bcm2835aux.c
++++ b/drivers/spi/spi-bcm2835aux.c
+@@ -569,7 +569,7 @@ static int bcm2835aux_spi_probe(struct platform_device *pdev)
+ 		goto out_clk_disable;
+ 	}
+ 
+-	err = devm_spi_register_master(&pdev->dev, master);
++	err = spi_register_master(master);
+ 	if (err) {
+ 		dev_err(&pdev->dev, "could not register SPI master: %d\n", err);
+ 		goto out_clk_disable;
+@@ -593,6 +593,8 @@ static int bcm2835aux_spi_remove(struct platform_device *pdev)
+ 
+ 	bcm2835aux_debugfs_remove(bs);
+ 
++	spi_unregister_master(master);
++
+ 	bcm2835aux_spi_reset_hw(bs);
+ 
+ 	/* disable the HW block by releasing the clock */
+diff --git a/drivers/spi/spi-dw.c b/drivers/spi/spi-dw.c
+index 31e3f866d11a..dbf9b8d5cebe 100644
+--- a/drivers/spi/spi-dw.c
++++ b/drivers/spi/spi-dw.c
+@@ -128,12 +128,20 @@ void dw_spi_set_cs(struct spi_device *spi, bool enable)
+ {
+ 	struct dw_spi *dws = spi_controller_get_devdata(spi->controller);
+ 	struct chip_data *chip = spi_get_ctldata(spi);
++	bool cs_high = !!(spi->mode & SPI_CS_HIGH);
+ 
+ 	/* Chip select logic is inverted from spi_set_cs() */
+ 	if (chip && chip->cs_control)
+ 		chip->cs_control(!enable);
+ 
+-	if (!enable)
++	/*
++	 * DW SPI controller demands any native CS being set in order to
++	 * proceed with data transfer. So in order to activate the SPI
++	 * communications we must set a corresponding bit in the Slave
++	 * Enable register no matter whether the SPI core is configured to
++	 * support active-high or active-low CS level.
++	 */
++	if (cs_high == enable)
+ 		dw_writel(dws, DW_SPI_SER, BIT(spi->chip_select));
+ 	else if (dws->cs_override)
+ 		dw_writel(dws, DW_SPI_SER, 0);
+@@ -526,7 +534,7 @@ int dw_spi_add_host(struct device *dev, struct dw_spi *dws)
+ 		}
+ 	}
+ 
+-	ret = devm_spi_register_controller(dev, master);
++	ret = spi_register_controller(master);
+ 	if (ret) {
+ 		dev_err(&master->dev, "problem registering spi master\n");
+ 		goto err_dma_exit;
+@@ -550,6 +558,8 @@ void dw_spi_remove_host(struct dw_spi *dws)
+ {
+ 	dw_spi_debugfs_remove(dws);
+ 
++	spi_unregister_controller(dws->master);
++
+ 	if (dws->dma_ops && dws->dma_ops->dma_exit)
+ 		dws->dma_ops->dma_exit(dws);
+ 
+diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
+index 2e318158fca9..5f8eb2589595 100644
+--- a/drivers/spi/spi-pxa2xx.c
++++ b/drivers/spi/spi-pxa2xx.c
+@@ -1879,7 +1879,7 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ 
+ 	/* Register with the SPI framework */
+ 	platform_set_drvdata(pdev, drv_data);
+-	status = devm_spi_register_controller(&pdev->dev, controller);
++	status = spi_register_controller(controller);
+ 	if (status != 0) {
+ 		dev_err(&pdev->dev, "problem registering spi controller\n");
+ 		goto out_error_pm_runtime_enabled;
+@@ -1888,7 +1888,6 @@ static int pxa2xx_spi_probe(struct platform_device *pdev)
+ 	return status;
+ 
+ out_error_pm_runtime_enabled:
+-	pm_runtime_put_noidle(&pdev->dev);
+ 	pm_runtime_disable(&pdev->dev);
+ 
+ out_error_clock_enabled:
+@@ -1915,6 +1914,8 @@ static int pxa2xx_spi_remove(struct platform_device *pdev)
+ 
+ 	pm_runtime_get_sync(&pdev->dev);
+ 
++	spi_unregister_controller(drv_data->controller);
++
+ 	/* Disable the SSP at the peripheral and SOC level */
+ 	pxa2xx_spi_write(drv_data, SSCR0, 0);
+ 	clk_disable_unprepare(ssp->clk);
+diff --git a/drivers/spi/spi.c b/drivers/spi/spi.c
+index 755221bc3745..1fc29a665a4a 100644
+--- a/drivers/spi/spi.c
++++ b/drivers/spi/spi.c
+@@ -2768,6 +2768,8 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ 	struct spi_controller *found;
+ 	int id = ctlr->bus_num;
+ 
++	device_for_each_child(&ctlr->dev, NULL, __unregister);
++
+ 	/* First make sure that this controller was ever added */
+ 	mutex_lock(&board_lock);
+ 	found = idr_find(&spi_master_idr, id);
+@@ -2780,7 +2782,6 @@ void spi_unregister_controller(struct spi_controller *ctlr)
+ 	list_del(&ctlr->list);
+ 	mutex_unlock(&board_lock);
+ 
+-	device_for_each_child(&ctlr->dev, NULL, __unregister);
+ 	device_unregister(&ctlr->dev);
+ 	/* free bus id */
+ 	mutex_lock(&board_lock);
+diff --git a/drivers/staging/mt7621-pci/pci-mt7621.c b/drivers/staging/mt7621-pci/pci-mt7621.c
+index a1dafec0890a..6eb7436af462 100644
+--- a/drivers/staging/mt7621-pci/pci-mt7621.c
++++ b/drivers/staging/mt7621-pci/pci-mt7621.c
+@@ -479,17 +479,25 @@ static void mt7621_pcie_init_ports(struct mt7621_pcie *pcie)
+ 
+ 	mt7621_perst_gpio_pcie_deassert(pcie);
+ 
++	tmp = NULL;
+ 	list_for_each_entry(port, &pcie->ports, list) {
+ 		u32 slot = port->slot;
+ 
+ 		if (!mt7621_pcie_port_is_linkup(port)) {
+ 			dev_err(dev, "pcie%d no card, disable it (RST & CLK)\n",
+ 				slot);
+-			if (slot != 1)
+-				phy_power_off(port->phy);
+ 			mt7621_control_assert(port);
+ 			mt7621_pcie_port_clk_disable(port);
+ 			port->enabled = false;
++
++			if (slot == 0) {
++				tmp = port;
++				continue;
++			}
++
++			if (slot == 1 && tmp && !tmp->enabled)
++				phy_power_off(tmp->phy);
++
+ 		}
+ 	}
+ 
+diff --git a/drivers/staging/wfx/main.c b/drivers/staging/wfx/main.c
+index 76b2ff7fc7fe..2c757b81efa9 100644
+--- a/drivers/staging/wfx/main.c
++++ b/drivers/staging/wfx/main.c
+@@ -466,7 +466,6 @@ int wfx_probe(struct wfx_dev *wdev)
+ 
+ err2:
+ 	ieee80211_unregister_hw(wdev->hw);
+-	ieee80211_free_hw(wdev->hw);
+ err1:
+ 	wfx_bh_unregister(wdev);
+ 	return err;
+diff --git a/drivers/target/iscsi/iscsi_target.c b/drivers/target/iscsi/iscsi_target.c
+index 9fc7e374a29b..59379d662626 100644
+--- a/drivers/target/iscsi/iscsi_target.c
++++ b/drivers/target/iscsi/iscsi_target.c
+@@ -4301,30 +4301,37 @@ int iscsit_close_connection(
+ 	if (!atomic_read(&sess->session_reinstatement) &&
+ 	     atomic_read(&sess->session_fall_back_to_erl0)) {
+ 		spin_unlock_bh(&sess->conn_lock);
++		complete_all(&sess->session_wait_comp);
+ 		iscsit_close_session(sess);
+ 
+ 		return 0;
+ 	} else if (atomic_read(&sess->session_logout)) {
+ 		pr_debug("Moving to TARG_SESS_STATE_FREE.\n");
+ 		sess->session_state = TARG_SESS_STATE_FREE;
+-		spin_unlock_bh(&sess->conn_lock);
+ 
+-		if (atomic_read(&sess->sleep_on_sess_wait_comp))
+-			complete(&sess->session_wait_comp);
++		if (atomic_read(&sess->session_close)) {
++			spin_unlock_bh(&sess->conn_lock);
++			complete_all(&sess->session_wait_comp);
++			iscsit_close_session(sess);
++		} else {
++			spin_unlock_bh(&sess->conn_lock);
++		}
+ 
+ 		return 0;
+ 	} else {
+ 		pr_debug("Moving to TARG_SESS_STATE_FAILED.\n");
+ 		sess->session_state = TARG_SESS_STATE_FAILED;
+ 
+-		if (!atomic_read(&sess->session_continuation)) {
+-			spin_unlock_bh(&sess->conn_lock);
++		if (!atomic_read(&sess->session_continuation))
+ 			iscsit_start_time2retain_handler(sess);
+-		} else
+-			spin_unlock_bh(&sess->conn_lock);
+ 
+-		if (atomic_read(&sess->sleep_on_sess_wait_comp))
+-			complete(&sess->session_wait_comp);
++		if (atomic_read(&sess->session_close)) {
++			spin_unlock_bh(&sess->conn_lock);
++			complete_all(&sess->session_wait_comp);
++			iscsit_close_session(sess);
++		} else {
++			spin_unlock_bh(&sess->conn_lock);
++		}
+ 
+ 		return 0;
+ 	}
+@@ -4429,9 +4436,9 @@ static void iscsit_logout_post_handler_closesession(
+ 	complete(&conn->conn_logout_comp);
+ 
+ 	iscsit_dec_conn_usage_count(conn);
++	atomic_set(&sess->session_close, 1);
+ 	iscsit_stop_session(sess, sleep, sleep);
+ 	iscsit_dec_session_usage_count(sess);
+-	iscsit_close_session(sess);
+ }
+ 
+ static void iscsit_logout_post_handler_samecid(
+@@ -4566,49 +4573,6 @@ void iscsit_fail_session(struct iscsi_session *sess)
+ 	sess->session_state = TARG_SESS_STATE_FAILED;
+ }
+ 
+-int iscsit_free_session(struct iscsi_session *sess)
+-{
+-	u16 conn_count = atomic_read(&sess->nconn);
+-	struct iscsi_conn *conn, *conn_tmp = NULL;
+-	int is_last;
+-
+-	spin_lock_bh(&sess->conn_lock);
+-	atomic_set(&sess->sleep_on_sess_wait_comp, 1);
+-
+-	list_for_each_entry_safe(conn, conn_tmp, &sess->sess_conn_list,
+-			conn_list) {
+-		if (conn_count == 0)
+-			break;
+-
+-		if (list_is_last(&conn->conn_list, &sess->sess_conn_list)) {
+-			is_last = 1;
+-		} else {
+-			iscsit_inc_conn_usage_count(conn_tmp);
+-			is_last = 0;
+-		}
+-		iscsit_inc_conn_usage_count(conn);
+-
+-		spin_unlock_bh(&sess->conn_lock);
+-		iscsit_cause_connection_reinstatement(conn, 1);
+-		spin_lock_bh(&sess->conn_lock);
+-
+-		iscsit_dec_conn_usage_count(conn);
+-		if (is_last == 0)
+-			iscsit_dec_conn_usage_count(conn_tmp);
+-
+-		conn_count--;
+-	}
+-
+-	if (atomic_read(&sess->nconn)) {
+-		spin_unlock_bh(&sess->conn_lock);
+-		wait_for_completion(&sess->session_wait_comp);
+-	} else
+-		spin_unlock_bh(&sess->conn_lock);
+-
+-	iscsit_close_session(sess);
+-	return 0;
+-}
+-
+ void iscsit_stop_session(
+ 	struct iscsi_session *sess,
+ 	int session_sleep,
+@@ -4619,8 +4583,6 @@ void iscsit_stop_session(
+ 	int is_last;
+ 
+ 	spin_lock_bh(&sess->conn_lock);
+-	if (session_sleep)
+-		atomic_set(&sess->sleep_on_sess_wait_comp, 1);
+ 
+ 	if (connection_sleep) {
+ 		list_for_each_entry_safe(conn, conn_tmp, &sess->sess_conn_list,
+@@ -4678,12 +4640,15 @@ int iscsit_release_sessions_for_tpg(struct iscsi_portal_group *tpg, int force)
+ 		spin_lock(&sess->conn_lock);
+ 		if (atomic_read(&sess->session_fall_back_to_erl0) ||
+ 		    atomic_read(&sess->session_logout) ||
++		    atomic_read(&sess->session_close) ||
+ 		    (sess->time2retain_timer_flags & ISCSI_TF_EXPIRED)) {
+ 			spin_unlock(&sess->conn_lock);
+ 			continue;
+ 		}
++		iscsit_inc_session_usage_count(sess);
+ 		atomic_set(&sess->session_reinstatement, 1);
+ 		atomic_set(&sess->session_fall_back_to_erl0, 1);
++		atomic_set(&sess->session_close, 1);
+ 		spin_unlock(&sess->conn_lock);
+ 
+ 		list_move_tail(&se_sess->sess_list, &free_list);
+@@ -4693,7 +4658,9 @@ int iscsit_release_sessions_for_tpg(struct iscsi_portal_group *tpg, int force)
+ 	list_for_each_entry_safe(se_sess, se_sess_tmp, &free_list, sess_list) {
+ 		sess = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+ 
+-		iscsit_free_session(sess);
++		list_del_init(&se_sess->sess_list);
++		iscsit_stop_session(sess, 1, 1);
++		iscsit_dec_session_usage_count(sess);
+ 		session_count++;
+ 	}
+ 
+diff --git a/drivers/target/iscsi/iscsi_target.h b/drivers/target/iscsi/iscsi_target.h
+index c95f56a3ce31..7409ce2a6607 100644
+--- a/drivers/target/iscsi/iscsi_target.h
++++ b/drivers/target/iscsi/iscsi_target.h
+@@ -43,7 +43,6 @@ extern int iscsi_target_rx_thread(void *);
+ extern int iscsit_close_connection(struct iscsi_conn *);
+ extern int iscsit_close_session(struct iscsi_session *);
+ extern void iscsit_fail_session(struct iscsi_session *);
+-extern int iscsit_free_session(struct iscsi_session *);
+ extern void iscsit_stop_session(struct iscsi_session *, int, int);
+ extern int iscsit_release_sessions_for_tpg(struct iscsi_portal_group *, int);
+ 
+diff --git a/drivers/target/iscsi/iscsi_target_configfs.c b/drivers/target/iscsi/iscsi_target_configfs.c
+index 42b369fc415e..0fa1d57b26fa 100644
+--- a/drivers/target/iscsi/iscsi_target_configfs.c
++++ b/drivers/target/iscsi/iscsi_target_configfs.c
+@@ -1476,20 +1476,23 @@ static void lio_tpg_close_session(struct se_session *se_sess)
+ 	spin_lock(&sess->conn_lock);
+ 	if (atomic_read(&sess->session_fall_back_to_erl0) ||
+ 	    atomic_read(&sess->session_logout) ||
++	    atomic_read(&sess->session_close) ||
+ 	    (sess->time2retain_timer_flags & ISCSI_TF_EXPIRED)) {
+ 		spin_unlock(&sess->conn_lock);
+ 		spin_unlock_bh(&se_tpg->session_lock);
+ 		return;
+ 	}
++	iscsit_inc_session_usage_count(sess);
+ 	atomic_set(&sess->session_reinstatement, 1);
+ 	atomic_set(&sess->session_fall_back_to_erl0, 1);
++	atomic_set(&sess->session_close, 1);
+ 	spin_unlock(&sess->conn_lock);
+ 
+ 	iscsit_stop_time2retain_timer(sess);
+ 	spin_unlock_bh(&se_tpg->session_lock);
+ 
+ 	iscsit_stop_session(sess, 1, 1);
+-	iscsit_close_session(sess);
++	iscsit_dec_session_usage_count(sess);
+ }
+ 
+ static u32 lio_tpg_get_inst_index(struct se_portal_group *se_tpg)
+diff --git a/drivers/target/iscsi/iscsi_target_login.c b/drivers/target/iscsi/iscsi_target_login.c
+index f53330813207..731ee67fe914 100644
+--- a/drivers/target/iscsi/iscsi_target_login.c
++++ b/drivers/target/iscsi/iscsi_target_login.c
+@@ -156,6 +156,7 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
+ 		spin_lock(&sess_p->conn_lock);
+ 		if (atomic_read(&sess_p->session_fall_back_to_erl0) ||
+ 		    atomic_read(&sess_p->session_logout) ||
++		    atomic_read(&sess_p->session_close) ||
+ 		    (sess_p->time2retain_timer_flags & ISCSI_TF_EXPIRED)) {
+ 			spin_unlock(&sess_p->conn_lock);
+ 			continue;
+@@ -166,6 +167,7 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
+ 		   (sess_p->sess_ops->SessionType == sessiontype))) {
+ 			atomic_set(&sess_p->session_reinstatement, 1);
+ 			atomic_set(&sess_p->session_fall_back_to_erl0, 1);
++			atomic_set(&sess_p->session_close, 1);
+ 			spin_unlock(&sess_p->conn_lock);
+ 			iscsit_inc_session_usage_count(sess_p);
+ 			iscsit_stop_time2retain_timer(sess_p);
+@@ -190,7 +192,6 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
+ 	if (sess->session_state == TARG_SESS_STATE_FAILED) {
+ 		spin_unlock_bh(&sess->conn_lock);
+ 		iscsit_dec_session_usage_count(sess);
+-		iscsit_close_session(sess);
+ 		return 0;
+ 	}
+ 	spin_unlock_bh(&sess->conn_lock);
+@@ -198,7 +199,6 @@ int iscsi_check_for_session_reinstatement(struct iscsi_conn *conn)
+ 	iscsit_stop_session(sess, 1, 1);
+ 	iscsit_dec_session_usage_count(sess);
+ 
+-	iscsit_close_session(sess);
+ 	return 0;
+ }
+ 
+@@ -486,6 +486,7 @@ static int iscsi_login_non_zero_tsih_s2(
+ 		sess_p = (struct iscsi_session *)se_sess->fabric_sess_ptr;
+ 		if (atomic_read(&sess_p->session_fall_back_to_erl0) ||
+ 		    atomic_read(&sess_p->session_logout) ||
++		    atomic_read(&sess_p->session_close) ||
+ 		   (sess_p->time2retain_timer_flags & ISCSI_TF_EXPIRED))
+ 			continue;
+ 		if (!memcmp(sess_p->isid, pdu->isid, 6) &&
+diff --git a/drivers/video/fbdev/vt8500lcdfb.c b/drivers/video/fbdev/vt8500lcdfb.c
+index f744479dc7df..c61476247ba8 100644
+--- a/drivers/video/fbdev/vt8500lcdfb.c
++++ b/drivers/video/fbdev/vt8500lcdfb.c
+@@ -230,6 +230,7 @@ static int vt8500lcd_blank(int blank, struct fb_info *info)
+ 		    info->fix.visual == FB_VISUAL_STATIC_PSEUDOCOLOR)
+ 			for (i = 0; i < 256; i++)
+ 				vt8500lcd_setcolreg(i, 0, 0, 0, 0, info);
++		fallthrough;
+ 	case FB_BLANK_UNBLANK:
+ 		if (info->fix.visual == FB_VISUAL_PSEUDOCOLOR ||
+ 		    info->fix.visual == FB_VISUAL_STATIC_PSEUDOCOLOR)
+diff --git a/drivers/video/fbdev/w100fb.c b/drivers/video/fbdev/w100fb.c
+index ad26cbffbc6f..0c2c0963aeb8 100644
+--- a/drivers/video/fbdev/w100fb.c
++++ b/drivers/video/fbdev/w100fb.c
+@@ -588,6 +588,7 @@ static void w100fb_restore_vidmem(struct w100fb_par *par)
+ 		memsize=par->mach->mem->size;
+ 		memcpy_toio(remapped_fbuf + (W100_FB_BASE-MEM_WINDOW_BASE), par->saved_extmem, memsize);
+ 		vfree(par->saved_extmem);
++		par->saved_extmem = NULL;
+ 	}
+ 	if (par->saved_intmem) {
+ 		memsize=MEM_INT_SIZE;
+@@ -596,6 +597,7 @@ static void w100fb_restore_vidmem(struct w100fb_par *par)
+ 		else
+ 			memcpy_toio(remapped_fbuf + (W100_FB_BASE-MEM_WINDOW_BASE), par->saved_intmem, memsize);
+ 		vfree(par->saved_intmem);
++		par->saved_intmem = NULL;
+ 	}
+ }
+ 
+diff --git a/drivers/watchdog/imx_sc_wdt.c b/drivers/watchdog/imx_sc_wdt.c
+index 8ed89f032ebf..e0e62149a6f4 100644
+--- a/drivers/watchdog/imx_sc_wdt.c
++++ b/drivers/watchdog/imx_sc_wdt.c
+@@ -177,6 +177,11 @@ static int imx_sc_wdt_probe(struct platform_device *pdev)
+ 	wdog->timeout = DEFAULT_TIMEOUT;
+ 
+ 	watchdog_init_timeout(wdog, 0, dev);
++
++	ret = imx_sc_wdt_set_timeout(wdog, wdog->timeout);
++	if (ret)
++		return ret;
++
+ 	watchdog_stop_on_reboot(wdog);
+ 	watchdog_stop_on_unregister(wdog);
+ 
+diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
+index c57c71b7d53d..ffe9bd843922 100644
+--- a/drivers/xen/pvcalls-back.c
++++ b/drivers/xen/pvcalls-back.c
+@@ -1087,7 +1087,8 @@ static void set_backend_state(struct xenbus_device *dev,
+ 		case XenbusStateInitialised:
+ 			switch (state) {
+ 			case XenbusStateConnected:
+-				backend_connect(dev);
++				if (backend_connect(dev))
++					return;
+ 				xenbus_switch_state(dev, XenbusStateConnected);
+ 				break;
+ 			case XenbusStateClosing:
+diff --git a/fs/aio.c b/fs/aio.c
+index 5f3d3d814928..6483f9274d5e 100644
+--- a/fs/aio.c
++++ b/fs/aio.c
+@@ -176,6 +176,7 @@ struct fsync_iocb {
+ 	struct file		*file;
+ 	struct work_struct	work;
+ 	bool			datasync;
++	struct cred		*creds;
+ };
+ 
+ struct poll_iocb {
+@@ -1589,8 +1590,11 @@ static int aio_write(struct kiocb *req, const struct iocb *iocb,
+ static void aio_fsync_work(struct work_struct *work)
+ {
+ 	struct aio_kiocb *iocb = container_of(work, struct aio_kiocb, fsync.work);
++	const struct cred *old_cred = override_creds(iocb->fsync.creds);
+ 
+ 	iocb->ki_res.res = vfs_fsync(iocb->fsync.file, iocb->fsync.datasync);
++	revert_creds(old_cred);
++	put_cred(iocb->fsync.creds);
+ 	iocb_put(iocb);
+ }
+ 
+@@ -1604,6 +1608,10 @@ static int aio_fsync(struct fsync_iocb *req, const struct iocb *iocb,
+ 	if (unlikely(!req->file->f_op->fsync))
+ 		return -EINVAL;
+ 
++	req->creds = prepare_creds();
++	if (!req->creds)
++		return -ENOMEM;
++
+ 	req->datasync = datasync;
+ 	INIT_WORK(&req->work, aio_fsync_work);
+ 	schedule_work(&req->work);
+diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
+index fa77fe5258b0..82d5ea522c33 100644
+--- a/fs/cifs/cifsfs.c
++++ b/fs/cifs/cifsfs.c
+@@ -621,7 +621,7 @@ cifs_show_options(struct seq_file *s, struct dentry *root)
+ 	seq_printf(s, ",actimeo=%lu", cifs_sb->actimeo / HZ);
+ 
+ 	if (tcon->ses->chan_max > 1)
+-		seq_printf(s, ",multichannel,max_channel=%zu",
++		seq_printf(s, ",multichannel,max_channels=%zu",
+ 			   tcon->ses->chan_max);
+ 
+ 	return 0;
+diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
+index 28c0be5e69b7..d9160eaa9e32 100644
+--- a/fs/cifs/smb2pdu.c
++++ b/fs/cifs/smb2pdu.c
+@@ -2868,7 +2868,9 @@ SMB2_ioctl_init(struct cifs_tcon *tcon, struct smb_rqst *rqst,
+ 	 * response size smaller.
+ 	 */
+ 	req->MaxOutputResponse = cpu_to_le32(max_response_size);
+-
++	req->sync_hdr.CreditCharge =
++		cpu_to_le16(DIV_ROUND_UP(max(indatalen, max_response_size),
++					 SMB2_MAX_BUFFER_SIZE));
+ 	if (is_fsctl)
+ 		req->Flags = cpu_to_le32(SMB2_0_IOCTL_IS_FSCTL);
+ 	else
+diff --git a/fs/fat/inode.c b/fs/fat/inode.c
+index 71946da84388..bf8e04e25f35 100644
+--- a/fs/fat/inode.c
++++ b/fs/fat/inode.c
+@@ -1520,6 +1520,12 @@ static int fat_read_bpb(struct super_block *sb, struct fat_boot_sector *b,
+ 		goto out;
+ 	}
+ 
++	if (bpb->fat_fat_length == 0 && bpb->fat32_length == 0) {
++		if (!silent)
++			fat_msg(sb, KERN_ERR, "bogus number of FAT sectors");
++		goto out;
++	}
++
+ 	error = 0;
+ 
+ out:
+diff --git a/fs/gfs2/lops.c b/fs/gfs2/lops.c
+index 3a020bdc358c..966ed37c9acd 100644
+--- a/fs/gfs2/lops.c
++++ b/fs/gfs2/lops.c
+@@ -505,12 +505,12 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head,
+ 	unsigned int bsize = sdp->sd_sb.sb_bsize, off;
+ 	unsigned int bsize_shift = sdp->sd_sb.sb_bsize_shift;
+ 	unsigned int shift = PAGE_SHIFT - bsize_shift;
+-	unsigned int max_bio_size = 2 * 1024 * 1024;
++	unsigned int max_blocks = 2 * 1024 * 1024 >> bsize_shift;
+ 	struct gfs2_journal_extent *je;
+ 	int sz, ret = 0;
+ 	struct bio *bio = NULL;
+ 	struct page *page = NULL;
+-	bool bio_chained = false, done = false;
++	bool done = false;
+ 	errseq_t since;
+ 
+ 	memset(head, 0, sizeof(*head));
+@@ -533,10 +533,7 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head,
+ 				off = 0;
+ 			}
+ 
+-			if (!bio || (bio_chained && !off) ||
+-			    bio->bi_iter.bi_size >= max_bio_size) {
+-				/* start new bio */
+-			} else {
++			if (bio && (off || block < blocks_submitted + max_blocks)) {
+ 				sector_t sector = dblock << sdp->sd_fsb2bb_shift;
+ 
+ 				if (bio_end_sector(bio) == sector) {
+@@ -549,19 +546,17 @@ int gfs2_find_jhead(struct gfs2_jdesc *jd, struct gfs2_log_header_host *head,
+ 						(PAGE_SIZE - off) >> bsize_shift;
+ 
+ 					bio = gfs2_chain_bio(bio, blocks);
+-					bio_chained = true;
+ 					goto add_block_to_new_bio;
+ 				}
+ 			}
+ 
+ 			if (bio) {
+-				blocks_submitted = block + 1;
++				blocks_submitted = block;
+ 				submit_bio(bio);
+ 			}
+ 
+ 			bio = gfs2_log_alloc_bio(sdp, dblock, gfs2_end_log_read);
+ 			bio->bi_opf = REQ_OP_READ;
+-			bio_chained = false;
+ add_block_to_new_bio:
+ 			sz = bio_add_page(bio, page, bsize, off);
+ 			BUG_ON(sz != bsize);
+@@ -569,7 +564,7 @@ block_added:
+ 			off += bsize;
+ 			if (off == PAGE_SIZE)
+ 				page = NULL;
+-			if (blocks_submitted < 2 * max_bio_size >> bsize_shift) {
++			if (blocks_submitted <= blocks_read + max_blocks) {
+ 				/* Keep at least one bio in flight */
+ 				continue;
+ 			}
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index c6e1f76a6ee0..8276c3c42894 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -6254,8 +6254,8 @@ static int io_sqe_buffer_register(struct io_ring_ctx *ctx, void __user *arg,
+ 
+ 		ret = 0;
+ 		if (!pages || nr_pages > got_pages) {
+-			kfree(vmas);
+-			kfree(pages);
++			kvfree(vmas);
++			kvfree(pages);
+ 			pages = kvmalloc_array(nr_pages, sizeof(struct page *),
+ 						GFP_KERNEL);
+ 			vmas = kvmalloc_array(nr_pages,
+@@ -6488,11 +6488,9 @@ static int io_uring_release(struct inode *inode, struct file *file)
+ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ 				  struct files_struct *files)
+ {
+-	struct io_kiocb *req;
+-	DEFINE_WAIT(wait);
+-
+ 	while (!list_empty_careful(&ctx->inflight_list)) {
+-		struct io_kiocb *cancel_req = NULL;
++		struct io_kiocb *cancel_req = NULL, *req;
++		DEFINE_WAIT(wait);
+ 
+ 		spin_lock_irq(&ctx->inflight_lock);
+ 		list_for_each_entry(req, &ctx->inflight_list, inflight_entry) {
+@@ -6531,7 +6529,8 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ 			 * all we had, then we're done with this request.
+ 			 */
+ 			if (refcount_sub_and_test(2, &cancel_req->refs)) {
+-				io_put_req(cancel_req);
++				io_free_req(cancel_req);
++				finish_wait(&ctx->inflight_wait, &wait);
+ 				continue;
+ 			}
+ 		}
+@@ -6539,8 +6538,8 @@ static void io_uring_cancel_files(struct io_ring_ctx *ctx,
+ 		io_wq_cancel_work(ctx->io_wq, &cancel_req->work);
+ 		io_put_req(cancel_req);
+ 		schedule();
++		finish_wait(&ctx->inflight_wait, &wait);
+ 	}
+-	finish_wait(&ctx->inflight_wait, &wait);
+ }
+ 
+ static int io_uring_flush(struct file *file, void *data)
+diff --git a/fs/nilfs2/segment.c b/fs/nilfs2/segment.c
+index 445eef41bfaf..91b58c897f92 100644
+--- a/fs/nilfs2/segment.c
++++ b/fs/nilfs2/segment.c
+@@ -2780,6 +2780,8 @@ int nilfs_attach_log_writer(struct super_block *sb, struct nilfs_root *root)
+ 	if (!nilfs->ns_writer)
+ 		return -ENOMEM;
+ 
++	inode_attach_wb(nilfs->ns_bdev->bd_inode, NULL);
++
+ 	err = nilfs_segctor_start_thread(nilfs->ns_writer);
+ 	if (err) {
+ 		kfree(nilfs->ns_writer);
+diff --git a/fs/notify/fanotify/fanotify.c b/fs/notify/fanotify/fanotify.c
+index deb13f0a0f7d..d24548ed31b9 100644
+--- a/fs/notify/fanotify/fanotify.c
++++ b/fs/notify/fanotify/fanotify.c
+@@ -171,6 +171,10 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 		if (!fsnotify_iter_should_report_type(iter_info, type))
+ 			continue;
+ 		mark = iter_info->marks[type];
++
++		/* Apply ignore mask regardless of ISDIR and ON_CHILD flags */
++		marks_ignored_mask |= mark->ignored_mask;
++
+ 		/*
+ 		 * If the event is on dir and this mark doesn't care about
+ 		 * events on dir, don't send it!
+@@ -188,7 +192,6 @@ static u32 fanotify_group_event_mask(struct fsnotify_group *group,
+ 			continue;
+ 
+ 		marks_mask |= mark->mask;
+-		marks_ignored_mask |= mark->ignored_mask;
+ 	}
+ 
+ 	test_mask = event_mask & marks_mask & ~marks_ignored_mask;
+diff --git a/fs/overlayfs/copy_up.c b/fs/overlayfs/copy_up.c
+index 9fc47c2e078d..3190dac8f330 100644
+--- a/fs/overlayfs/copy_up.c
++++ b/fs/overlayfs/copy_up.c
+@@ -40,7 +40,7 @@ int ovl_copy_xattr(struct dentry *old, struct dentry *new)
+ {
+ 	ssize_t list_size, size, value_size = 0;
+ 	char *buf, *name, *value = NULL;
+-	int uninitialized_var(error);
++	int error = 0;
+ 	size_t slen;
+ 
+ 	if (!(old->d_inode->i_opflags & IOP_XATTR) ||
+diff --git a/fs/overlayfs/overlayfs.h b/fs/overlayfs/overlayfs.h
+index 3d3f2b8bdae5..c2424330209a 100644
+--- a/fs/overlayfs/overlayfs.h
++++ b/fs/overlayfs/overlayfs.h
+@@ -339,6 +339,9 @@ int ovl_check_fb_len(struct ovl_fb *fb, int fb_len);
+ 
+ static inline int ovl_check_fh_len(struct ovl_fh *fh, int fh_len)
+ {
++	if (fh_len < sizeof(struct ovl_fh))
++		return -EINVAL;
++
+ 	return ovl_check_fb_len(&fh->fb, fh_len - OVL_FH_WIRE_OFFSET);
+ }
+ 
+diff --git a/fs/proc/inode.c b/fs/proc/inode.c
+index 6da18316d209..36b6819f12fe 100644
+--- a/fs/proc/inode.c
++++ b/fs/proc/inode.c
+@@ -448,7 +448,7 @@ const struct inode_operations proc_link_inode_operations = {
+ 
+ struct inode *proc_get_inode(struct super_block *sb, struct proc_dir_entry *de)
+ {
+-	struct inode *inode = new_inode_pseudo(sb);
++	struct inode *inode = new_inode(sb);
+ 
+ 	if (inode) {
+ 		inode->i_ino = de->low_ino;
+diff --git a/fs/proc/self.c b/fs/proc/self.c
+index 57c0a1047250..32af065397f8 100644
+--- a/fs/proc/self.c
++++ b/fs/proc/self.c
+@@ -43,7 +43,7 @@ int proc_setup_self(struct super_block *s)
+ 	inode_lock(root_inode);
+ 	self = d_alloc_name(s->s_root, "self");
+ 	if (self) {
+-		struct inode *inode = new_inode_pseudo(s);
++		struct inode *inode = new_inode(s);
+ 		if (inode) {
+ 			inode->i_ino = self_inum;
+ 			inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
+diff --git a/fs/proc/thread_self.c b/fs/proc/thread_self.c
+index f61ae53533f5..fac9e50b33a6 100644
+--- a/fs/proc/thread_self.c
++++ b/fs/proc/thread_self.c
+@@ -43,7 +43,7 @@ int proc_setup_thread_self(struct super_block *s)
+ 	inode_lock(root_inode);
+ 	thread_self = d_alloc_name(s->s_root, "thread-self");
+ 	if (thread_self) {
+-		struct inode *inode = new_inode_pseudo(s);
++		struct inode *inode = new_inode(s);
+ 		if (inode) {
+ 			inode->i_ino = thread_self_inum;
+ 			inode->i_mtime = inode->i_atime = inode->i_ctime = current_time(inode);
+diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
+index e00f41aa8ec4..39da8d8b561d 100644
+--- a/include/asm-generic/vmlinux.lds.h
++++ b/include/asm-generic/vmlinux.lds.h
+@@ -535,6 +535,7 @@
+ 									\
+ 	RO_EXCEPTION_TABLE						\
+ 	NOTES								\
++	BTF								\
+ 									\
+ 	. = ALIGN((align));						\
+ 	__end_rodata = .;
+@@ -621,6 +622,20 @@
+ 		__stop___ex_table = .;					\
+ 	}
+ 
++/*
++ * .BTF
++ */
++#ifdef CONFIG_DEBUG_INFO_BTF
++#define BTF								\
++	.BTF : AT(ADDR(.BTF) - LOAD_OFFSET) {				\
++		__start_BTF = .;					\
++		*(.BTF)							\
++		__stop_BTF = .;						\
++	}
++#else
++#define BTF
++#endif
++
+ /*
+  * Init task
+  */
+diff --git a/include/linux/elfnote.h b/include/linux/elfnote.h
+index f236f5b931b2..7fdd7f355b52 100644
+--- a/include/linux/elfnote.h
++++ b/include/linux/elfnote.h
+@@ -54,7 +54,7 @@
+ .popsection				;
+ 
+ #define ELFNOTE(name, type, desc)		\
+-	ELFNOTE_START(name, type, "")		\
++	ELFNOTE_START(name, type, "a")		\
+ 		desc			;	\
+ 	ELFNOTE_END
+ 
+diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
+index b2a7159f66da..67b65176b5f2 100644
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -1394,8 +1394,8 @@ static inline long kvm_arch_vcpu_async_ioctl(struct file *filp,
+ }
+ #endif /* CONFIG_HAVE_KVM_VCPU_ASYNC_IOCTL */
+ 
+-int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+-		unsigned long start, unsigned long end, bool blockable);
++void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
++					    unsigned long start, unsigned long end);
+ 
+ #ifdef CONFIG_HAVE_KVM_VCPU_RUN_PID_CHANGE
+ int kvm_arch_vcpu_run_pid_change(struct kvm_vcpu *vcpu);
+diff --git a/include/linux/mm.h b/include/linux/mm.h
+index 96deeecd9179..9b9f48489576 100644
+--- a/include/linux/mm.h
++++ b/include/linux/mm.h
+@@ -669,6 +669,7 @@ static inline void *kvcalloc(size_t n, size_t size, gfp_t flags)
+ }
+ 
+ extern void kvfree(const void *addr);
++extern void kvfree_sensitive(const void *addr, size_t len);
+ 
+ /*
+  * Mapcount of compound page as a whole, does not include mapped sub-pages.
+diff --git a/include/linux/padata.h b/include/linux/padata.h
+index a0d8b41850b2..693cae9bfe66 100644
+--- a/include/linux/padata.h
++++ b/include/linux/padata.h
+@@ -139,7 +139,8 @@ struct padata_shell {
+ /**
+  * struct padata_instance - The overall control structure.
+  *
+- * @node: Used by CPU hotplug.
++ * @cpu_online_node: Linkage for CPU online callback.
++ * @cpu_dead_node: Linkage for CPU offline callback.
+  * @parallel_wq: The workqueue used for parallel work.
+  * @serial_wq: The workqueue used for serial work.
+  * @pslist: List of padata_shell objects attached to this instance.
+@@ -150,7 +151,8 @@ struct padata_shell {
+  * @flags: padata flags.
+  */
+ struct padata_instance {
+-	struct hlist_node		 node;
++	struct hlist_node		cpu_online_node;
++	struct hlist_node		cpu_dead_node;
+ 	struct workqueue_struct		*parallel_wq;
+ 	struct workqueue_struct		*serial_wq;
+ 	struct list_head		pslist;
+diff --git a/include/linux/ptdump.h b/include/linux/ptdump.h
+index a67065c403c3..ac01502763bf 100644
+--- a/include/linux/ptdump.h
++++ b/include/linux/ptdump.h
+@@ -14,6 +14,7 @@ struct ptdump_state {
+ 	/* level is 0:PGD to 4:PTE, or -1 if unknown */
+ 	void (*note_page)(struct ptdump_state *st, unsigned long addr,
+ 			  int level, unsigned long val);
++	void (*effective_prot)(struct ptdump_state *st, int level, u64 val);
+ 	const struct ptdump_range *range;
+ };
+ 
+diff --git a/include/linux/set_memory.h b/include/linux/set_memory.h
+index 86281ac7c305..860e0f843c12 100644
+--- a/include/linux/set_memory.h
++++ b/include/linux/set_memory.h
+@@ -26,7 +26,7 @@ static inline int set_direct_map_default_noflush(struct page *page)
+ #endif
+ 
+ #ifndef set_mce_nospec
+-static inline int set_mce_nospec(unsigned long pfn)
++static inline int set_mce_nospec(unsigned long pfn, bool unmap)
+ {
+ 	return 0;
+ }
+diff --git a/include/media/videobuf2-dma-contig.h b/include/media/videobuf2-dma-contig.h
+index 5604818d137e..5be313cbf7d7 100644
+--- a/include/media/videobuf2-dma-contig.h
++++ b/include/media/videobuf2-dma-contig.h
+@@ -25,7 +25,7 @@ vb2_dma_contig_plane_dma_addr(struct vb2_buffer *vb, unsigned int plane_no)
+ }
+ 
+ int vb2_dma_contig_set_max_seg_size(struct device *dev, unsigned int size);
+-void vb2_dma_contig_clear_max_seg_size(struct device *dev);
++static inline void vb2_dma_contig_clear_max_seg_size(struct device *dev) { }
+ 
+ extern const struct vb2_mem_ops vb2_dma_contig_memops;
+ 
+diff --git a/include/net/inet_hashtables.h b/include/net/inet_hashtables.h
+index d0019d3395cf..59802eb8d2cc 100644
+--- a/include/net/inet_hashtables.h
++++ b/include/net/inet_hashtables.h
+@@ -185,6 +185,12 @@ static inline spinlock_t *inet_ehash_lockp(
+ 
+ int inet_ehash_locks_alloc(struct inet_hashinfo *hashinfo);
+ 
++static inline void inet_hashinfo2_free_mod(struct inet_hashinfo *h)
++{
++	kfree(h->lhash2);
++	h->lhash2 = NULL;
++}
++
+ static inline void inet_ehash_locks_free(struct inet_hashinfo *hashinfo)
+ {
+ 	kvfree(hashinfo->ehash_locks);
+diff --git a/include/target/iscsi/iscsi_target_core.h b/include/target/iscsi/iscsi_target_core.h
+index a49d37140a64..591cd9e4692c 100644
+--- a/include/target/iscsi/iscsi_target_core.h
++++ b/include/target/iscsi/iscsi_target_core.h
+@@ -676,7 +676,7 @@ struct iscsi_session {
+ 	atomic_t		session_logout;
+ 	atomic_t		session_reinstatement;
+ 	atomic_t		session_stop_active;
+-	atomic_t		sleep_on_sess_wait_comp;
++	atomic_t		session_close;
+ 	/* connection list */
+ 	struct list_head	sess_conn_list;
+ 	struct list_head	cr_active_list;
+diff --git a/kernel/bpf/btf.c b/kernel/bpf/btf.c
+index 7787bdcb5d68..ff04f60c78d1 100644
+--- a/kernel/bpf/btf.c
++++ b/kernel/bpf/btf.c
+@@ -3477,8 +3477,8 @@ errout:
+ 	return ERR_PTR(err);
+ }
+ 
+-extern char __weak _binary__btf_vmlinux_bin_start[];
+-extern char __weak _binary__btf_vmlinux_bin_end[];
++extern char __weak __start_BTF[];
++extern char __weak __stop_BTF[];
+ extern struct btf *btf_vmlinux;
+ 
+ #define BPF_MAP_TYPE(_id, _ops)
+@@ -3605,9 +3605,8 @@ struct btf *btf_parse_vmlinux(void)
+ 	}
+ 	env->btf = btf;
+ 
+-	btf->data = _binary__btf_vmlinux_bin_start;
+-	btf->data_size = _binary__btf_vmlinux_bin_end -
+-		_binary__btf_vmlinux_bin_start;
++	btf->data = __start_BTF;
++	btf->data_size = __stop_BTF - __start_BTF;
+ 
+ 	err = btf_parse_hdr(env);
+ 	if (err)
+diff --git a/kernel/bpf/sysfs_btf.c b/kernel/bpf/sysfs_btf.c
+index 7ae5dddd1fe6..3b495773de5a 100644
+--- a/kernel/bpf/sysfs_btf.c
++++ b/kernel/bpf/sysfs_btf.c
+@@ -9,15 +9,15 @@
+ #include <linux/sysfs.h>
+ 
+ /* See scripts/link-vmlinux.sh, gen_btf() func for details */
+-extern char __weak _binary__btf_vmlinux_bin_start[];
+-extern char __weak _binary__btf_vmlinux_bin_end[];
++extern char __weak __start_BTF[];
++extern char __weak __stop_BTF[];
+ 
+ static ssize_t
+ btf_vmlinux_read(struct file *file, struct kobject *kobj,
+ 		 struct bin_attribute *bin_attr,
+ 		 char *buf, loff_t off, size_t len)
+ {
+-	memcpy(buf, _binary__btf_vmlinux_bin_start + off, len);
++	memcpy(buf, __start_BTF + off, len);
+ 	return len;
+ }
+ 
+@@ -30,15 +30,14 @@ static struct kobject *btf_kobj;
+ 
+ static int __init btf_vmlinux_init(void)
+ {
+-	if (!_binary__btf_vmlinux_bin_start)
++	if (!__start_BTF)
+ 		return 0;
+ 
+ 	btf_kobj = kobject_create_and_add("btf", kernel_kobj);
+ 	if (!btf_kobj)
+ 		return -ENOMEM;
+ 
+-	bin_attr_btf_vmlinux.size = _binary__btf_vmlinux_bin_end -
+-				    _binary__btf_vmlinux_bin_start;
++	bin_attr_btf_vmlinux.size = __stop_BTF - __start_BTF;
+ 
+ 	return sysfs_create_bin_file(btf_kobj, &bin_attr_btf_vmlinux);
+ }
+diff --git a/kernel/events/core.c b/kernel/events/core.c
+index 29ace472f916..ce9fd7605190 100644
+--- a/kernel/events/core.c
++++ b/kernel/events/core.c
+@@ -93,11 +93,11 @@ static void remote_function(void *data)
+  * @info:	the function call argument
+  *
+  * Calls the function @func when the task is currently running. This might
+- * be on the current CPU, which just calls the function directly
++ * be on the current CPU, which just calls the function directly.  This will
++ * retry due to any failures in smp_call_function_single(), such as if the
++ * task_cpu() goes offline concurrently.
+  *
+- * returns: @func return value, or
+- *	    -ESRCH  - when the process isn't running
+- *	    -EAGAIN - when the process moved away
++ * returns @func return value or -ESRCH when the process isn't running
+  */
+ static int
+ task_function_call(struct task_struct *p, remote_function_f func, void *info)
+@@ -110,11 +110,16 @@ task_function_call(struct task_struct *p, remote_function_f func, void *info)
+ 	};
+ 	int ret;
+ 
+-	do {
+-		ret = smp_call_function_single(task_cpu(p), remote_function, &data, 1);
+-		if (!ret)
+-			ret = data.ret;
+-	} while (ret == -EAGAIN);
++	for (;;) {
++		ret = smp_call_function_single(task_cpu(p), remote_function,
++					       &data, 1);
++		ret = !ret ? data.ret : -EAGAIN;
++
++		if (ret != -EAGAIN)
++			break;
++
++		cond_resched();
++	}
+ 
+ 	return ret;
+ }
+diff --git a/kernel/padata.c b/kernel/padata.c
+index 62082597d4a2..fee14ae90d96 100644
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -703,7 +703,7 @@ static int padata_cpu_online(unsigned int cpu, struct hlist_node *node)
+ 	struct padata_instance *pinst;
+ 	int ret;
+ 
+-	pinst = hlist_entry_safe(node, struct padata_instance, node);
++	pinst = hlist_entry_safe(node, struct padata_instance, cpu_online_node);
+ 	if (!pinst_has_cpu(pinst, cpu))
+ 		return 0;
+ 
+@@ -718,7 +718,7 @@ static int padata_cpu_dead(unsigned int cpu, struct hlist_node *node)
+ 	struct padata_instance *pinst;
+ 	int ret;
+ 
+-	pinst = hlist_entry_safe(node, struct padata_instance, node);
++	pinst = hlist_entry_safe(node, struct padata_instance, cpu_dead_node);
+ 	if (!pinst_has_cpu(pinst, cpu))
+ 		return 0;
+ 
+@@ -734,8 +734,9 @@ static enum cpuhp_state hp_online;
+ static void __padata_free(struct padata_instance *pinst)
+ {
+ #ifdef CONFIG_HOTPLUG_CPU
+-	cpuhp_state_remove_instance_nocalls(CPUHP_PADATA_DEAD, &pinst->node);
+-	cpuhp_state_remove_instance_nocalls(hp_online, &pinst->node);
++	cpuhp_state_remove_instance_nocalls(CPUHP_PADATA_DEAD,
++					    &pinst->cpu_dead_node);
++	cpuhp_state_remove_instance_nocalls(hp_online, &pinst->cpu_online_node);
+ #endif
+ 
+ 	WARN_ON(!list_empty(&pinst->pslist));
+@@ -939,9 +940,10 @@ static struct padata_instance *padata_alloc(const char *name,
+ 	mutex_init(&pinst->lock);
+ 
+ #ifdef CONFIG_HOTPLUG_CPU
+-	cpuhp_state_add_instance_nocalls_cpuslocked(hp_online, &pinst->node);
++	cpuhp_state_add_instance_nocalls_cpuslocked(hp_online,
++						    &pinst->cpu_online_node);
+ 	cpuhp_state_add_instance_nocalls_cpuslocked(CPUHP_PADATA_DEAD,
+-						    &pinst->node);
++						    &pinst->cpu_dead_node);
+ #endif
+ 
+ 	put_online_cpus();
+diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
+index 603d3d3cbf77..efb15f0f464b 100644
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -2682,7 +2682,7 @@ static void task_tick_numa(struct rq *rq, struct task_struct *curr)
+ 	/*
+ 	 * We don't care about NUMA placement if we don't have memory.
+ 	 */
+-	if (!curr->mm || (curr->flags & PF_EXITING) || work->next != work)
++	if ((curr->flags & (PF_EXITING | PF_KTHREAD)) || work->next != work)
+ 		return;
+ 
+ 	/*
+diff --git a/lib/bitmap.c b/lib/bitmap.c
+index 89260aa342d6..972eb01f4d0b 100644
+--- a/lib/bitmap.c
++++ b/lib/bitmap.c
+@@ -740,8 +740,9 @@ int bitmap_parse(const char *start, unsigned int buflen,
+ 	int chunks = BITS_TO_U32(nmaskbits);
+ 	u32 *bitmap = (u32 *)maskp;
+ 	int unset_bit;
++	int chunk;
+ 
+-	while (1) {
++	for (chunk = 0; ; chunk++) {
+ 		end = bitmap_find_region_reverse(start, end);
+ 		if (start > end)
+ 			break;
+@@ -749,7 +750,11 @@ int bitmap_parse(const char *start, unsigned int buflen,
+ 		if (!chunks--)
+ 			return -EOVERFLOW;
+ 
+-		end = bitmap_get_x32_reverse(start, end, bitmap++);
++#if defined(CONFIG_64BIT) && defined(__BIG_ENDIAN)
++		end = bitmap_get_x32_reverse(start, end, &bitmap[chunk ^ 1]);
++#else
++		end = bitmap_get_x32_reverse(start, end, &bitmap[chunk]);
++#endif
+ 		if (IS_ERR(end))
+ 			return PTR_ERR(end);
+ 	}
+diff --git a/lib/lzo/lzo1x_compress.c b/lib/lzo/lzo1x_compress.c
+index 717c940112f9..8ad5ba2b86e2 100644
+--- a/lib/lzo/lzo1x_compress.c
++++ b/lib/lzo/lzo1x_compress.c
+@@ -268,6 +268,19 @@ m_len_done:
+ 				*op++ = (M4_MARKER | ((m_off >> 11) & 8)
+ 						| (m_len - 2));
+ 			else {
++				if (unlikely(((m_off & 0x403f) == 0x403f)
++						&& (m_len >= 261)
++						&& (m_len <= 264))
++						&& likely(bitstream_version)) {
++					// Under lzo-rle, block copies
++					// for 261 <= length <= 264 and
++					// (distance & 0x80f3) == 0x80f3
++					// can result in ambiguous
++					// output. Adjust length
++					// to 260 to prevent ambiguity.
++					ip -= m_len - 260;
++					m_len = 260;
++				}
+ 				m_len -= M4_MAX_LEN;
+ 				*op++ = (M4_MARKER | ((m_off >> 11) & 8));
+ 				while (unlikely(m_len > 255)) {
+diff --git a/mm/gup.c b/mm/gup.c
+index 1b521e0ac1de..b6a214e405f6 100644
+--- a/mm/gup.c
++++ b/mm/gup.c
+@@ -176,13 +176,22 @@ static int follow_pfn_pte(struct vm_area_struct *vma, unsigned long address,
+ }
+ 
+ /*
+- * FOLL_FORCE can write to even unwritable pte's, but only
+- * after we've gone through a COW cycle and they are dirty.
++ * FOLL_FORCE or a forced COW break can write even to unwritable pte's,
++ * but only after we've gone through a COW cycle and they are dirty.
+  */
+ static inline bool can_follow_write_pte(pte_t pte, unsigned int flags)
+ {
+-	return pte_write(pte) ||
+-		((flags & FOLL_FORCE) && (flags & FOLL_COW) && pte_dirty(pte));
++	return pte_write(pte) || ((flags & FOLL_COW) && pte_dirty(pte));
++}
++
++/*
++ * A (separate) COW fault might break the page the other way and
++ * get_user_pages() would return the page from what is now the wrong
++ * VM. So we need to force a COW break at GUP time even for reads.
++ */
++static inline bool should_force_cow_break(struct vm_area_struct *vma, unsigned int flags)
++{
++	return is_cow_mapping(vma->vm_flags) && (flags & (FOLL_GET | FOLL_PIN));
+ }
+ 
+ static struct page *follow_page_pte(struct vm_area_struct *vma,
+@@ -848,12 +857,18 @@ static long __get_user_pages(struct task_struct *tsk, struct mm_struct *mm,
+ 				goto out;
+ 			}
+ 			if (is_vm_hugetlb_page(vma)) {
++				if (should_force_cow_break(vma, foll_flags))
++					foll_flags |= FOLL_WRITE;
+ 				i = follow_hugetlb_page(mm, vma, pages, vmas,
+ 						&start, &nr_pages, i,
+-						gup_flags, nonblocking);
++						foll_flags, nonblocking);
+ 				continue;
+ 			}
+ 		}
++
++		if (should_force_cow_break(vma, foll_flags))
++			foll_flags |= FOLL_WRITE;
++
+ retry:
+ 		/*
+ 		 * If we have a pending SIGKILL, don't keep faulting pages and
+@@ -2364,6 +2379,10 @@ static bool gup_fast_permitted(unsigned long start, unsigned long end)
+  *
+  * If the architecture does not support this function, simply return with no
+  * pages pinned.
++ *
++ * Careful, careful! COW breaking can go either way, so a non-write
++ * access can get ambiguous page results. If you call this function without
++ * 'write' set, you'd better be sure that you're ok with that ambiguity.
+  */
+ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ 			  struct page **pages)
+@@ -2391,6 +2410,12 @@ int __get_user_pages_fast(unsigned long start, int nr_pages, int write,
+ 	 *
+ 	 * We do not adopt an rcu_read_lock(.) here as we also want to
+ 	 * block IPIs that come from THPs splitting.
++	 *
++	 * NOTE! We allow read-only gup_fast() here, but you'd better be
++	 * careful about possible COW pages. You'll get _a_ COW page, but
++	 * not necessarily the one you intended to get depending on what
++	 * COW event happens after this. COW may break the page copy in a
++	 * random direction.
+ 	 */
+ 
+ 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+@@ -2448,10 +2473,17 @@ static int internal_get_user_pages_fast(unsigned long start, int nr_pages,
+ 	if (unlikely(!access_ok((void __user *)start, len)))
+ 		return -EFAULT;
+ 
++	/*
++	 * The FAST_GUP case requires FOLL_WRITE even for pure reads,
++	 * because get_user_pages() may need to cause an early COW in
++	 * order to avoid confusing the normal COW routines. So only
++	 * targets that are already writable are safe to do by just
++	 * looking at the page tables.
++	 */
+ 	if (IS_ENABLED(CONFIG_HAVE_FAST_GUP) &&
+ 	    gup_fast_permitted(start, end)) {
+ 		local_irq_disable();
+-		gup_pgd_range(addr, end, gup_flags, pages, &nr);
++		gup_pgd_range(addr, end, gup_flags | FOLL_WRITE, pages, &nr);
+ 		local_irq_enable();
+ 		ret = nr;
+ 	}
+diff --git a/mm/huge_memory.c b/mm/huge_memory.c
+index 24ad53b4dfc0..4ffaeb9dd4af 100644
+--- a/mm/huge_memory.c
++++ b/mm/huge_memory.c
+@@ -1465,13 +1465,12 @@ out_unlock:
+ }
+ 
+ /*
+- * FOLL_FORCE can write to even unwritable pmd's, but only
+- * after we've gone through a COW cycle and they are dirty.
++ * FOLL_FORCE or a forced COW break can write even to unwritable pmd's,
++ * but only after we've gone through a COW cycle and they are dirty.
+  */
+ static inline bool can_follow_write_pmd(pmd_t pmd, unsigned int flags)
+ {
+-	return pmd_write(pmd) ||
+-	       ((flags & FOLL_FORCE) && (flags & FOLL_COW) && pmd_dirty(pmd));
++	return pmd_write(pmd) || ((flags & FOLL_COW) && pmd_dirty(pmd));
+ }
+ 
+ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
+diff --git a/mm/ptdump.c b/mm/ptdump.c
+index 26208d0d03b7..f4ce916f5602 100644
+--- a/mm/ptdump.c
++++ b/mm/ptdump.c
+@@ -36,6 +36,9 @@ static int ptdump_pgd_entry(pgd_t *pgd, unsigned long addr,
+ 		return note_kasan_page_table(walk, addr);
+ #endif
+ 
++	if (st->effective_prot)
++		st->effective_prot(st, 0, pgd_val(val));
++
+ 	if (pgd_leaf(val))
+ 		st->note_page(st, addr, 0, pgd_val(val));
+ 
+@@ -53,6 +56,9 @@ static int ptdump_p4d_entry(p4d_t *p4d, unsigned long addr,
+ 		return note_kasan_page_table(walk, addr);
+ #endif
+ 
++	if (st->effective_prot)
++		st->effective_prot(st, 1, p4d_val(val));
++
+ 	if (p4d_leaf(val))
+ 		st->note_page(st, addr, 1, p4d_val(val));
+ 
+@@ -70,6 +76,9 @@ static int ptdump_pud_entry(pud_t *pud, unsigned long addr,
+ 		return note_kasan_page_table(walk, addr);
+ #endif
+ 
++	if (st->effective_prot)
++		st->effective_prot(st, 2, pud_val(val));
++
+ 	if (pud_leaf(val))
+ 		st->note_page(st, addr, 2, pud_val(val));
+ 
+@@ -87,6 +96,8 @@ static int ptdump_pmd_entry(pmd_t *pmd, unsigned long addr,
+ 		return note_kasan_page_table(walk, addr);
+ #endif
+ 
++	if (st->effective_prot)
++		st->effective_prot(st, 3, pmd_val(val));
+ 	if (pmd_leaf(val))
+ 		st->note_page(st, addr, 3, pmd_val(val));
+ 
+@@ -97,8 +108,12 @@ static int ptdump_pte_entry(pte_t *pte, unsigned long addr,
+ 			    unsigned long next, struct mm_walk *walk)
+ {
+ 	struct ptdump_state *st = walk->private;
++	pte_t val = READ_ONCE(*pte);
++
++	if (st->effective_prot)
++		st->effective_prot(st, 4, pte_val(val));
+ 
+-	st->note_page(st, addr, 4, pte_val(READ_ONCE(*pte)));
++	st->note_page(st, addr, 4, pte_val(val));
+ 
+ 	return 0;
+ }
+diff --git a/mm/slab_common.c b/mm/slab_common.c
+index 1907cb2903c7..4b045f12177f 100644
+--- a/mm/slab_common.c
++++ b/mm/slab_common.c
+@@ -1303,7 +1303,8 @@ void __init create_kmalloc_caches(slab_flags_t flags)
+ 			kmalloc_caches[KMALLOC_DMA][i] = create_kmalloc_cache(
+ 				kmalloc_info[i].name[KMALLOC_DMA],
+ 				kmalloc_info[i].size,
+-				SLAB_CACHE_DMA | flags, 0, 0);
++				SLAB_CACHE_DMA | flags, 0,
++				kmalloc_info[i].size);
+ 		}
+ 	}
+ #endif
+diff --git a/mm/slub.c b/mm/slub.c
+index 3b17e774831a..fd886d24ee29 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -5778,8 +5778,10 @@ static int sysfs_slab_add(struct kmem_cache *s)
+ 
+ 	s->kobj.kset = kset;
+ 	err = kobject_init_and_add(&s->kobj, &slab_ktype, NULL, "%s", name);
+-	if (err)
++	if (err) {
++		kobject_put(&s->kobj);
+ 		goto out;
++	}
+ 
+ 	err = sysfs_create_group(&s->kobj, &slab_attr_group);
+ 	if (err)
+diff --git a/mm/util.c b/mm/util.c
+index 988d11e6c17c..dc1c877d5481 100644
+--- a/mm/util.c
++++ b/mm/util.c
+@@ -604,6 +604,24 @@ void kvfree(const void *addr)
+ }
+ EXPORT_SYMBOL(kvfree);
+ 
++/**
++ * kvfree_sensitive - Free a data object containing sensitive information.
++ * @addr: address of the data object to be freed.
++ * @len: length of the data object.
++ *
++ * Use the special memzero_explicit() function to clear the content of a
++ * kvmalloc'ed object containing sensitive data to make sure that the
++ * compiler won't optimize out the data clearing.
++ */
++void kvfree_sensitive(const void *addr, size_t len)
++{
++	if (likely(!ZERO_OR_NULL_PTR(addr))) {
++		memzero_explicit((void *)addr, len);
++		kvfree(addr);
++	}
++}
++EXPORT_SYMBOL(kvfree_sensitive);
++
+ static inline void *__page_rmapping(struct page *page)
+ {
+ 	unsigned long mapping;
+diff --git a/net/bridge/br_arp_nd_proxy.c b/net/bridge/br_arp_nd_proxy.c
+index 37908561a64b..b18cdf03edb3 100644
+--- a/net/bridge/br_arp_nd_proxy.c
++++ b/net/bridge/br_arp_nd_proxy.c
+@@ -276,6 +276,10 @@ static void br_nd_send(struct net_bridge *br, struct net_bridge_port *p,
+ 	ns_olen = request->len - (skb_network_offset(request) +
+ 				  sizeof(struct ipv6hdr)) - sizeof(*ns);
+ 	for (i = 0; i < ns_olen - 1; i += (ns->opt[i + 1] << 3)) {
++		if (!ns->opt[i + 1]) {
++			kfree_skb(reply);
++			return;
++		}
+ 		if (ns->opt[i] == ND_OPT_SOURCE_LL_ADDR) {
+ 			daddr = ns->opt + i + sizeof(struct nd_opt_hdr);
+ 			break;
+diff --git a/net/dccp/proto.c b/net/dccp/proto.c
+index 4af8a98fe784..c13b6609474b 100644
+--- a/net/dccp/proto.c
++++ b/net/dccp/proto.c
+@@ -1139,14 +1139,14 @@ static int __init dccp_init(void)
+ 	inet_hashinfo_init(&dccp_hashinfo);
+ 	rc = inet_hashinfo2_init_mod(&dccp_hashinfo);
+ 	if (rc)
+-		goto out_fail;
++		goto out_free_percpu;
+ 	rc = -ENOBUFS;
+ 	dccp_hashinfo.bind_bucket_cachep =
+ 		kmem_cache_create("dccp_bind_bucket",
+ 				  sizeof(struct inet_bind_bucket), 0,
+ 				  SLAB_HWCACHE_ALIGN, NULL);
+ 	if (!dccp_hashinfo.bind_bucket_cachep)
+-		goto out_free_percpu;
++		goto out_free_hashinfo2;
+ 
+ 	/*
+ 	 * Size and allocate the main established and bind bucket
+@@ -1242,6 +1242,8 @@ out_free_dccp_ehash:
+ 	free_pages((unsigned long)dccp_hashinfo.ehash, ehash_order);
+ out_free_bind_bucket_cachep:
+ 	kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
++out_free_hashinfo2:
++	inet_hashinfo2_free_mod(&dccp_hashinfo);
+ out_free_percpu:
+ 	percpu_counter_destroy(&dccp_orphan_count);
+ out_fail:
+@@ -1265,6 +1267,7 @@ static void __exit dccp_fini(void)
+ 	kmem_cache_destroy(dccp_hashinfo.bind_bucket_cachep);
+ 	dccp_ackvec_exit();
+ 	dccp_sysctl_exit();
++	inet_hashinfo2_free_mod(&dccp_hashinfo);
+ 	percpu_counter_destroy(&dccp_orphan_count);
+ }
+ 
+diff --git a/net/ipv6/ipv6_sockglue.c b/net/ipv6/ipv6_sockglue.c
+index 18d05403d3b5..5af97b4f5df3 100644
+--- a/net/ipv6/ipv6_sockglue.c
++++ b/net/ipv6/ipv6_sockglue.c
+@@ -183,14 +183,15 @@ static int do_ipv6_setsockopt(struct sock *sk, int level, int optname,
+ 					retv = -EBUSY;
+ 					break;
+ 				}
+-			}
+-			if (sk->sk_protocol == IPPROTO_TCP &&
+-			    sk->sk_prot != &tcpv6_prot) {
+-				retv = -EBUSY;
++			} else if (sk->sk_protocol == IPPROTO_TCP) {
++				if (sk->sk_prot != &tcpv6_prot) {
++					retv = -EBUSY;
++					break;
++				}
++			} else {
+ 				break;
+ 			}
+-			if (sk->sk_protocol != IPPROTO_TCP)
+-				break;
++
+ 			if (sk->sk_state != TCP_ESTABLISHED) {
+ 				retv = -ENOTCONN;
+ 				break;
+diff --git a/net/netlink/genetlink.c b/net/netlink/genetlink.c
+index 9f357aa22b94..bcbba0bef1c2 100644
+--- a/net/netlink/genetlink.c
++++ b/net/netlink/genetlink.c
+@@ -513,15 +513,58 @@ static void genl_family_rcv_msg_attrs_free(const struct genl_family *family,
+ 		kfree(attrbuf);
+ }
+ 
+-static int genl_lock_start(struct netlink_callback *cb)
++struct genl_start_context {
++	const struct genl_family *family;
++	struct nlmsghdr *nlh;
++	struct netlink_ext_ack *extack;
++	const struct genl_ops *ops;
++	int hdrlen;
++};
++
++static int genl_start(struct netlink_callback *cb)
+ {
+-	const struct genl_ops *ops = genl_dumpit_info(cb)->ops;
++	struct genl_start_context *ctx = cb->data;
++	const struct genl_ops *ops = ctx->ops;
++	struct genl_dumpit_info *info;
++	struct nlattr **attrs = NULL;
+ 	int rc = 0;
+ 
++	if (ops->validate & GENL_DONT_VALIDATE_DUMP)
++		goto no_attrs;
++
++	if (ctx->nlh->nlmsg_len < nlmsg_msg_size(ctx->hdrlen))
++		return -EINVAL;
++
++	attrs = genl_family_rcv_msg_attrs_parse(ctx->family, ctx->nlh, ctx->extack,
++						ops, ctx->hdrlen,
++						GENL_DONT_VALIDATE_DUMP_STRICT,
++						true);
++	if (IS_ERR(attrs))
++		return PTR_ERR(attrs);
++
++no_attrs:
++	info = genl_dumpit_info_alloc();
++	if (!info) {
++		kfree(attrs);
++		return -ENOMEM;
++	}
++	info->family = ctx->family;
++	info->ops = ops;
++	info->attrs = attrs;
++
++	cb->data = info;
+ 	if (ops->start) {
+-		genl_lock();
++		if (!ctx->family->parallel_ops)
++			genl_lock();
+ 		rc = ops->start(cb);
+-		genl_unlock();
++		if (!ctx->family->parallel_ops)
++			genl_unlock();
++	}
++
++	if (rc) {
++		kfree(attrs);
++		genl_dumpit_info_free(info);
++		cb->data = NULL;
+ 	}
+ 	return rc;
+ }
+@@ -548,7 +591,7 @@ static int genl_lock_done(struct netlink_callback *cb)
+ 		rc = ops->done(cb);
+ 		genl_unlock();
+ 	}
+-	genl_family_rcv_msg_attrs_free(info->family, info->attrs, true);
++	genl_family_rcv_msg_attrs_free(info->family, info->attrs, false);
+ 	genl_dumpit_info_free(info);
+ 	return rc;
+ }
+@@ -573,43 +616,23 @@ static int genl_family_rcv_msg_dumpit(const struct genl_family *family,
+ 				      const struct genl_ops *ops,
+ 				      int hdrlen, struct net *net)
+ {
+-	struct genl_dumpit_info *info;
+-	struct nlattr **attrs = NULL;
++	struct genl_start_context ctx;
+ 	int err;
+ 
+ 	if (!ops->dumpit)
+ 		return -EOPNOTSUPP;
+ 
+-	if (ops->validate & GENL_DONT_VALIDATE_DUMP)
+-		goto no_attrs;
+-
+-	if (nlh->nlmsg_len < nlmsg_msg_size(hdrlen))
+-		return -EINVAL;
+-
+-	attrs = genl_family_rcv_msg_attrs_parse(family, nlh, extack,
+-						ops, hdrlen,
+-						GENL_DONT_VALIDATE_DUMP_STRICT,
+-						true);
+-	if (IS_ERR(attrs))
+-		return PTR_ERR(attrs);
+-
+-no_attrs:
+-	/* Allocate dumpit info. It is going to be freed by done() callback. */
+-	info = genl_dumpit_info_alloc();
+-	if (!info) {
+-		genl_family_rcv_msg_attrs_free(family, attrs, true);
+-		return -ENOMEM;
+-	}
+-
+-	info->family = family;
+-	info->ops = ops;
+-	info->attrs = attrs;
++	ctx.family = family;
++	ctx.nlh = nlh;
++	ctx.extack = extack;
++	ctx.ops = ops;
++	ctx.hdrlen = hdrlen;
+ 
+ 	if (!family->parallel_ops) {
+ 		struct netlink_dump_control c = {
+ 			.module = family->module,
+-			.data = info,
+-			.start = genl_lock_start,
++			.data = &ctx,
++			.start = genl_start,
+ 			.dump = genl_lock_dumpit,
+ 			.done = genl_lock_done,
+ 		};
+@@ -617,12 +640,11 @@ no_attrs:
+ 		genl_unlock();
+ 		err = __netlink_dump_start(net->genl_sock, skb, nlh, &c);
+ 		genl_lock();
+-
+ 	} else {
+ 		struct netlink_dump_control c = {
+ 			.module = family->module,
+-			.data = info,
+-			.start = ops->start,
++			.data = &ctx,
++			.start = genl_start,
+ 			.dump = ops->dumpit,
+ 			.done = genl_parallel_done,
+ 		};
+diff --git a/net/tipc/msg.c b/net/tipc/msg.c
+index 0d515d20b056..bf17b13009d1 100644
+--- a/net/tipc/msg.c
++++ b/net/tipc/msg.c
+@@ -221,7 +221,7 @@ int tipc_msg_append(struct tipc_msg *_hdr, struct msghdr *m, int dlen,
+ 	accounted = skb ? msg_blocks(buf_msg(skb)) : 0;
+ 	total = accounted;
+ 
+-	while (rem) {
++	do {
+ 		if (!skb || skb->len >= mss) {
+ 			prev = skb;
+ 			skb = tipc_buf_acquire(mss, GFP_KERNEL);
+@@ -249,7 +249,7 @@ int tipc_msg_append(struct tipc_msg *_hdr, struct msghdr *m, int dlen,
+ 		skb_put(skb, cpy);
+ 		rem -= cpy;
+ 		total += msg_blocks(hdr) - curr;
+-	}
++	} while (rem);
+ 	return total - accounted;
+ }
+ 
+diff --git a/scripts/link-vmlinux.sh b/scripts/link-vmlinux.sh
+index ac569e197bfa..d09ab4afbda4 100755
+--- a/scripts/link-vmlinux.sh
++++ b/scripts/link-vmlinux.sh
+@@ -113,9 +113,6 @@ vmlinux_link()
+ gen_btf()
+ {
+ 	local pahole_ver
+-	local bin_arch
+-	local bin_format
+-	local bin_file
+ 
+ 	if ! [ -x "$(command -v ${PAHOLE})" ]; then
+ 		echo >&2 "BTF: ${1}: pahole (${PAHOLE}) is not available"
+@@ -133,17 +130,16 @@ gen_btf()
+ 	info "BTF" ${2}
+ 	LLVM_OBJCOPY=${OBJCOPY} ${PAHOLE} -J ${1}
+ 
+-	# dump .BTF section into raw binary file to link with final vmlinux
+-	bin_arch=$(LANG=C ${OBJDUMP} -f ${1} | grep architecture | \
+-		cut -d, -f1 | cut -d' ' -f2)
+-	bin_format=$(LANG=C ${OBJDUMP} -f ${1} | grep 'file format' | \
+-		awk '{print $4}')
+-	bin_file=.btf.vmlinux.bin
+-	${OBJCOPY} --change-section-address .BTF=0 \
+-		--set-section-flags .BTF=alloc -O binary \
+-		--only-section=.BTF ${1} $bin_file
+-	${OBJCOPY} -I binary -O ${bin_format} -B ${bin_arch} \
+-		--rename-section .data=.BTF $bin_file ${2}
++	# Create ${2} which contains just .BTF section but no symbols. Add
++	# SHF_ALLOC because .BTF will be part of the vmlinux image. --strip-all
++	# deletes all symbols including __start_BTF and __stop_BTF, which will
++	# be redefined in the linker script. Add 2>/dev/null to suppress GNU
++	# objcopy warnings: "empty loadable segment detected at ..."
++	${OBJCOPY} --only-section=.BTF --set-section-flags .BTF=alloc,readonly \
++		--strip-all ${1} ${2} 2>/dev/null
++	# Change e_type to ET_REL so that it can be used to link final vmlinux.
++	# Unlike GNU ld, lld does not allow an ET_EXEC input.
++	printf '\1' | dd of=${2} conv=notrunc bs=1 seek=16 status=none
+ }
+ 
+ # Create ${2} .o file with all symbols from the ${1} object file
+diff --git a/security/keys/internal.h b/security/keys/internal.h
+index 6d0ca48ae9a5..153d35c20d3d 100644
+--- a/security/keys/internal.h
++++ b/security/keys/internal.h
+@@ -350,15 +350,4 @@ static inline void key_check(const struct key *key)
+ #define key_check(key) do {} while(0)
+ 
+ #endif
+-
+-/*
+- * Helper function to clear and free a kvmalloc'ed memory object.
+- */
+-static inline void __kvzfree(const void *addr, size_t len)
+-{
+-	if (addr) {
+-		memset((void *)addr, 0, len);
+-		kvfree(addr);
+-	}
+-}
+ #endif /* _INTERNAL_H */
+diff --git a/security/keys/keyctl.c b/security/keys/keyctl.c
+index 5e01192e222a..edde63a63007 100644
+--- a/security/keys/keyctl.c
++++ b/security/keys/keyctl.c
+@@ -142,10 +142,7 @@ SYSCALL_DEFINE5(add_key, const char __user *, _type,
+ 
+ 	key_ref_put(keyring_ref);
+  error3:
+-	if (payload) {
+-		memzero_explicit(payload, plen);
+-		kvfree(payload);
+-	}
++	kvfree_sensitive(payload, plen);
+  error2:
+ 	kfree(description);
+  error:
+@@ -360,7 +357,7 @@ long keyctl_update_key(key_serial_t id,
+ 
+ 	key_ref_put(key_ref);
+ error2:
+-	__kvzfree(payload, plen);
++	kvfree_sensitive(payload, plen);
+ error:
+ 	return ret;
+ }
+@@ -914,7 +911,7 @@ can_read_key:
+ 		 */
+ 		if (ret > key_data_len) {
+ 			if (unlikely(key_data))
+-				__kvzfree(key_data, key_data_len);
++				kvfree_sensitive(key_data, key_data_len);
+ 			key_data_len = ret;
+ 			continue;	/* Allocate buffer */
+ 		}
+@@ -923,7 +920,7 @@ can_read_key:
+ 			ret = -EFAULT;
+ 		break;
+ 	}
+-	__kvzfree(key_data, key_data_len);
++	kvfree_sensitive(key_data, key_data_len);
+ 
+ key_put_out:
+ 	key_put(key);
+@@ -1225,10 +1222,7 @@ long keyctl_instantiate_key_common(key_serial_t id,
+ 		keyctl_change_reqkey_auth(NULL);
+ 
+ error2:
+-	if (payload) {
+-		memzero_explicit(payload, plen);
+-		kvfree(payload);
+-	}
++	kvfree_sensitive(payload, plen);
+ error:
+ 	return ret;
+ }
+diff --git a/security/smack/smack.h b/security/smack/smack.h
+index 62529f382942..335d2411abe4 100644
+--- a/security/smack/smack.h
++++ b/security/smack/smack.h
+@@ -148,7 +148,6 @@ struct smk_net4addr {
+ 	struct smack_known	*smk_label;	/* label */
+ };
+ 
+-#if IS_ENABLED(CONFIG_IPV6)
+ /*
+  * An entry in the table identifying IPv6 hosts.
+  */
+@@ -159,9 +158,7 @@ struct smk_net6addr {
+ 	int			smk_masks;	/* mask size */
+ 	struct smack_known	*smk_label;	/* label */
+ };
+-#endif /* CONFIG_IPV6 */
+ 
+-#ifdef SMACK_IPV6_PORT_LABELING
+ /*
+  * An entry in the table identifying ports.
+  */
+@@ -174,7 +171,6 @@ struct smk_port_label {
+ 	short			smk_sock_type;	/* Socket type */
+ 	short			smk_can_reuse;
+ };
+-#endif /* SMACK_IPV6_PORT_LABELING */
+ 
+ struct smack_known_list_elem {
+ 	struct list_head	list;
+@@ -335,9 +331,7 @@ extern struct smack_known smack_known_web;
+ extern struct mutex	smack_known_lock;
+ extern struct list_head smack_known_list;
+ extern struct list_head smk_net4addr_list;
+-#if IS_ENABLED(CONFIG_IPV6)
+ extern struct list_head smk_net6addr_list;
+-#endif /* CONFIG_IPV6 */
+ 
+ extern struct mutex     smack_onlycap_lock;
+ extern struct list_head smack_onlycap_list;
+diff --git a/security/smack/smack_lsm.c b/security/smack/smack_lsm.c
+index 8c61d175e195..14bf2f4aea3b 100644
+--- a/security/smack/smack_lsm.c
++++ b/security/smack/smack_lsm.c
+@@ -50,10 +50,8 @@
+ #define SMK_RECEIVING	1
+ #define SMK_SENDING	2
+ 
+-#ifdef SMACK_IPV6_PORT_LABELING
+-DEFINE_MUTEX(smack_ipv6_lock);
++static DEFINE_MUTEX(smack_ipv6_lock);
+ static LIST_HEAD(smk_ipv6_port_list);
+-#endif
+ static struct kmem_cache *smack_inode_cache;
+ struct kmem_cache *smack_rule_cache;
+ int smack_enabled;
+@@ -2320,7 +2318,6 @@ static struct smack_known *smack_ipv4host_label(struct sockaddr_in *sip)
+ 	return NULL;
+ }
+ 
+-#if IS_ENABLED(CONFIG_IPV6)
+ /*
+  * smk_ipv6_localhost - Check for local ipv6 host address
+  * @sip: the address
+@@ -2388,7 +2385,6 @@ static struct smack_known *smack_ipv6host_label(struct sockaddr_in6 *sip)
+ 
+ 	return NULL;
+ }
+-#endif /* CONFIG_IPV6 */
+ 
+ /**
+  * smack_netlabel - Set the secattr on a socket
+@@ -2477,7 +2473,6 @@ static int smack_netlabel_send(struct sock *sk, struct sockaddr_in *sap)
+ 	return smack_netlabel(sk, sk_lbl);
+ }
+ 
+-#if IS_ENABLED(CONFIG_IPV6)
+ /**
+  * smk_ipv6_check - check Smack access
+  * @subject: subject Smack label
+@@ -2510,7 +2505,6 @@ static int smk_ipv6_check(struct smack_known *subject,
+ 	rc = smk_bu_note("IPv6 check", subject, object, MAY_WRITE, rc);
+ 	return rc;
+ }
+-#endif /* CONFIG_IPV6 */
+ 
+ #ifdef SMACK_IPV6_PORT_LABELING
+ /**
+@@ -2599,6 +2593,7 @@ static void smk_ipv6_port_label(struct socket *sock, struct sockaddr *address)
+ 	mutex_unlock(&smack_ipv6_lock);
+ 	return;
+ }
++#endif
+ 
+ /**
+  * smk_ipv6_port_check - check Smack port access
+@@ -2661,7 +2656,6 @@ static int smk_ipv6_port_check(struct sock *sk, struct sockaddr_in6 *address,
+ 
+ 	return smk_ipv6_check(skp, object, address, act);
+ }
+-#endif /* SMACK_IPV6_PORT_LABELING */
+ 
+ /**
+  * smack_inode_setsecurity - set smack xattrs
+@@ -2836,24 +2830,21 @@ static int smack_socket_connect(struct socket *sock, struct sockaddr *sap,
+ 		return 0;
+ 	if (IS_ENABLED(CONFIG_IPV6) && sap->sa_family == AF_INET6) {
+ 		struct sockaddr_in6 *sip = (struct sockaddr_in6 *)sap;
+-#ifdef SMACK_IPV6_SECMARK_LABELING
+-		struct smack_known *rsp;
+-#endif
++		struct smack_known *rsp = NULL;
+ 
+ 		if (addrlen < SIN6_LEN_RFC2133)
+ 			return 0;
+-#ifdef SMACK_IPV6_SECMARK_LABELING
+-		rsp = smack_ipv6host_label(sip);
++		if (__is_defined(SMACK_IPV6_SECMARK_LABELING))
++			rsp = smack_ipv6host_label(sip);
+ 		if (rsp != NULL) {
+ 			struct socket_smack *ssp = sock->sk->sk_security;
+ 
+ 			rc = smk_ipv6_check(ssp->smk_out, rsp, sip,
+ 					    SMK_CONNECTING);
+ 		}
+-#endif
+-#ifdef SMACK_IPV6_PORT_LABELING
+-		rc = smk_ipv6_port_check(sock->sk, sip, SMK_CONNECTING);
+-#endif
++		if (__is_defined(SMACK_IPV6_PORT_LABELING))
++			rc = smk_ipv6_port_check(sock->sk, sip, SMK_CONNECTING);
++
+ 		return rc;
+ 	}
+ 	if (sap->sa_family != AF_INET || addrlen < sizeof(struct sockaddr_in))
+diff --git a/security/smack/smackfs.c b/security/smack/smackfs.c
+index e3e05c04dbd1..c21b656b3263 100644
+--- a/security/smack/smackfs.c
++++ b/security/smack/smackfs.c
+@@ -878,11 +878,21 @@ static ssize_t smk_set_cipso(struct file *file, const char __user *buf,
+ 	else
+ 		rule += strlen(skp->smk_known) + 1;
+ 
++	if (rule > data + count) {
++		rc = -EOVERFLOW;
++		goto out;
++	}
++
+ 	ret = sscanf(rule, "%d", &maplevel);
+ 	if (ret != 1 || maplevel > SMACK_CIPSO_MAXLEVEL)
+ 		goto out;
+ 
+ 	rule += SMK_DIGITLEN;
++	if (rule > data + count) {
++		rc = -EOVERFLOW;
++		goto out;
++	}
++
+ 	ret = sscanf(rule, "%d", &catlen);
+ 	if (ret != 1 || catlen > SMACK_CIPSO_MAXCATNUM)
+ 		goto out;
+diff --git a/sound/core/pcm_native.c b/sound/core/pcm_native.c
+index d5443eeb8b63..c936976e0e7b 100644
+--- a/sound/core/pcm_native.c
++++ b/sound/core/pcm_native.c
+@@ -138,6 +138,16 @@ void snd_pcm_stream_lock_irq(struct snd_pcm_substream *substream)
+ }
+ EXPORT_SYMBOL_GPL(snd_pcm_stream_lock_irq);
+ 
++static void snd_pcm_stream_lock_nested(struct snd_pcm_substream *substream)
++{
++	struct snd_pcm_group *group = &substream->self_group;
++
++	if (substream->pcm->nonatomic)
++		mutex_lock_nested(&group->mutex, SINGLE_DEPTH_NESTING);
++	else
++		spin_lock_nested(&group->lock, SINGLE_DEPTH_NESTING);
++}
++
+ /**
+  * snd_pcm_stream_unlock_irq - Unlock the PCM stream
+  * @substream: PCM substream
+@@ -2163,6 +2173,12 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
+ 	}
+ 	pcm_file = f.file->private_data;
+ 	substream1 = pcm_file->substream;
++
++	if (substream == substream1) {
++		res = -EINVAL;
++		goto _badf;
++	}
++
+ 	group = kzalloc(sizeof(*group), GFP_KERNEL);
+ 	if (!group) {
+ 		res = -ENOMEM;
+@@ -2191,7 +2207,7 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
+ 	snd_pcm_stream_unlock_irq(substream);
+ 
+ 	snd_pcm_group_lock_irq(target_group, nonatomic);
+-	snd_pcm_stream_lock(substream1);
++	snd_pcm_stream_lock_nested(substream1);
+ 	snd_pcm_group_assign(substream1, target_group);
+ 	refcount_inc(&target_group->refs);
+ 	snd_pcm_stream_unlock(substream1);
+@@ -2207,7 +2223,7 @@ static int snd_pcm_link(struct snd_pcm_substream *substream, int fd)
+ 
+ static void relink_to_local(struct snd_pcm_substream *substream)
+ {
+-	snd_pcm_stream_lock(substream);
++	snd_pcm_stream_lock_nested(substream);
+ 	snd_pcm_group_assign(substream, &substream->self_group);
+ 	snd_pcm_stream_unlock(substream);
+ }
+diff --git a/sound/firewire/fireface/ff-protocol-latter.c b/sound/firewire/fireface/ff-protocol-latter.c
+index 0e4c3a9ed5e4..76ae568489ef 100644
+--- a/sound/firewire/fireface/ff-protocol-latter.c
++++ b/sound/firewire/fireface/ff-protocol-latter.c
+@@ -107,18 +107,18 @@ static int latter_allocate_resources(struct snd_ff *ff, unsigned int rate)
+ 	int err;
+ 
+ 	// Set the number of data blocks transferred in a second.
+-	if (rate % 32000 == 0)
+-		code = 0x00;
++	if (rate % 48000 == 0)
++		code = 0x04;
+ 	else if (rate % 44100 == 0)
+ 		code = 0x02;
+-	else if (rate % 48000 == 0)
+-		code = 0x04;
++	else if (rate % 32000 == 0)
++		code = 0x00;
+ 	else
+ 		return -EINVAL;
+ 
+ 	if (rate >= 64000 && rate < 128000)
+ 		code |= 0x08;
+-	else if (rate >= 128000 && rate < 192000)
++	else if (rate >= 128000)
+ 		code |= 0x10;
+ 
+ 	reg = cpu_to_le32(code);
+@@ -140,7 +140,7 @@ static int latter_allocate_resources(struct snd_ff *ff, unsigned int rate)
+ 		if (curr_rate == rate)
+ 			break;
+ 	}
+-	if (count == 10)
++	if (count > 10)
+ 		return -ETIMEDOUT;
+ 
+ 	for (i = 0; i < ARRAY_SIZE(amdtp_rate_table); ++i) {
+diff --git a/sound/firewire/fireface/ff-stream.c b/sound/firewire/fireface/ff-stream.c
+index 63b79c4a5405..5452115c0ef9 100644
+--- a/sound/firewire/fireface/ff-stream.c
++++ b/sound/firewire/fireface/ff-stream.c
+@@ -184,7 +184,6 @@ int snd_ff_stream_start_duplex(struct snd_ff *ff, unsigned int rate)
+ 	 */
+ 	if (!amdtp_stream_running(&ff->rx_stream)) {
+ 		int spd = fw_parent_device(ff->unit)->max_speed;
+-		unsigned int ir_delay_cycle;
+ 
+ 		err = ff->spec->protocol->begin_session(ff, rate);
+ 		if (err < 0)
+@@ -200,14 +199,7 @@ int snd_ff_stream_start_duplex(struct snd_ff *ff, unsigned int rate)
+ 		if (err < 0)
+ 			goto error;
+ 
+-		// The device postpones start of transmission mostly for several
+-		// cycles after receiving packets firstly.
+-		if (ff->spec->protocol == &snd_ff_protocol_ff800)
+-			ir_delay_cycle = 800;	// = 100 msec
+-		else
+-			ir_delay_cycle = 16;	// = 2 msec
+-
+-		err = amdtp_domain_start(&ff->domain, ir_delay_cycle);
++		err = amdtp_domain_start(&ff->domain, 0);
+ 		if (err < 0)
+ 			goto error;
+ 
+diff --git a/sound/isa/es1688/es1688.c b/sound/isa/es1688/es1688.c
+index ff3a05ad99c0..64610571a5e1 100644
+--- a/sound/isa/es1688/es1688.c
++++ b/sound/isa/es1688/es1688.c
+@@ -267,8 +267,10 @@ static int snd_es968_pnp_detect(struct pnp_card_link *pcard,
+ 		return error;
+ 	}
+ 	error = snd_es1688_probe(card, dev);
+-	if (error < 0)
++	if (error < 0) {
++		snd_card_free(card);
+ 		return error;
++	}
+ 	pnp_set_card_drvdata(pcard, card);
+ 	snd_es968_pnp_is_probed = 1;
+ 	return 0;
+diff --git a/sound/pci/hda/hda_intel.c b/sound/pci/hda/hda_intel.c
+index 8b015b27e9c7..29da0b03b895 100644
+--- a/sound/pci/hda/hda_intel.c
++++ b/sound/pci/hda/hda_intel.c
+@@ -2659,6 +2659,9 @@ static const struct pci_device_id azx_ids[] = {
+ 	{ PCI_DEVICE(0x1002, 0xab20),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
++	{ PCI_DEVICE(0x1002, 0xab28),
++	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
++	  AZX_DCAPS_PM_RUNTIME },
+ 	{ PCI_DEVICE(0x1002, 0xab38),
+ 	  .driver_data = AZX_DRIVER_ATIHDMI_NS | AZX_DCAPS_PRESET_ATI_HDMI_NS |
+ 	  AZX_DCAPS_PM_RUNTIME },
+diff --git a/sound/pci/hda/patch_realtek.c b/sound/pci/hda/patch_realtek.c
+index e62d58872b6e..2c4575909441 100644
+--- a/sound/pci/hda/patch_realtek.c
++++ b/sound/pci/hda/patch_realtek.c
+@@ -8124,6 +8124,12 @@ static const struct snd_hda_pin_quirk alc269_pin_fixup_tbl[] = {
+ 		ALC225_STANDARD_PINS,
+ 		{0x12, 0xb7a60130},
+ 		{0x17, 0x90170110}),
++	SND_HDA_PIN_QUIRK(0x10ec0623, 0x17aa, "Lenovo", ALC283_FIXUP_HEADSET_MIC,
++		{0x14, 0x01014010},
++		{0x17, 0x90170120},
++		{0x18, 0x02a11030},
++		{0x19, 0x02a1103f},
++		{0x21, 0x0221101f}),
+ 	{}
+ };
+ 
+diff --git a/sound/soc/codecs/max9867.c b/sound/soc/codecs/max9867.c
+index 8600c5439e1e..2e4aa23b5a60 100644
+--- a/sound/soc/codecs/max9867.c
++++ b/sound/soc/codecs/max9867.c
+@@ -46,13 +46,13 @@ static const SNDRV_CTL_TLVD_DECLARE_DB_RANGE(max9867_micboost_tlv,
+ 
+ static const struct snd_kcontrol_new max9867_snd_controls[] = {
+ 	SOC_DOUBLE_R_TLV("Master Playback Volume", MAX9867_LEFTVOL,
+-			MAX9867_RIGHTVOL, 0, 41, 1, max9867_master_tlv),
++			MAX9867_RIGHTVOL, 0, 40, 1, max9867_master_tlv),
+ 	SOC_DOUBLE_R_TLV("Line Capture Volume", MAX9867_LEFTLINELVL,
+ 			MAX9867_RIGHTLINELVL, 0, 15, 1, max9867_line_tlv),
+ 	SOC_DOUBLE_R_TLV("Mic Capture Volume", MAX9867_LEFTMICGAIN,
+ 			MAX9867_RIGHTMICGAIN, 0, 20, 1, max9867_mic_tlv),
+ 	SOC_DOUBLE_R_TLV("Mic Boost Capture Volume", MAX9867_LEFTMICGAIN,
+-			MAX9867_RIGHTMICGAIN, 5, 4, 0, max9867_micboost_tlv),
++			MAX9867_RIGHTMICGAIN, 5, 3, 0, max9867_micboost_tlv),
+ 	SOC_SINGLE("Digital Sidetone Volume", MAX9867_SIDETONE, 0, 31, 1),
+ 	SOC_SINGLE_TLV("Digital Playback Volume", MAX9867_DACLEVEL, 0, 15, 1,
+ 			max9867_dac_tlv),
+diff --git a/sound/usb/card.c b/sound/usb/card.c
+index 827fb0bc8b56..8f559b505bb7 100644
+--- a/sound/usb/card.c
++++ b/sound/usb/card.c
+@@ -813,9 +813,6 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ 	if (chip == (void *)-1L)
+ 		return 0;
+ 
+-	chip->autosuspended = !!PMSG_IS_AUTO(message);
+-	if (!chip->autosuspended)
+-		snd_power_change_state(chip->card, SNDRV_CTL_POWER_D3hot);
+ 	if (!chip->num_suspended_intf++) {
+ 		list_for_each_entry(as, &chip->pcm_list, list) {
+ 			snd_usb_pcm_suspend(as);
+@@ -828,6 +825,11 @@ static int usb_audio_suspend(struct usb_interface *intf, pm_message_t message)
+ 			snd_usb_mixer_suspend(mixer);
+ 	}
+ 
++	if (!PMSG_IS_AUTO(message) && !chip->system_suspend) {
++		snd_power_change_state(chip->card, SNDRV_CTL_POWER_D3hot);
++		chip->system_suspend = chip->num_suspended_intf;
++	}
++
+ 	return 0;
+ }
+ 
+@@ -841,10 +843,10 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+ 
+ 	if (chip == (void *)-1L)
+ 		return 0;
+-	if (--chip->num_suspended_intf)
+-		return 0;
+ 
+ 	atomic_inc(&chip->active); /* avoid autopm */
++	if (chip->num_suspended_intf > 1)
++		goto out;
+ 
+ 	list_for_each_entry(as, &chip->pcm_list, list) {
+ 		err = snd_usb_pcm_resume(as);
+@@ -866,9 +868,12 @@ static int __usb_audio_resume(struct usb_interface *intf, bool reset_resume)
+ 		snd_usbmidi_resume(p);
+ 	}
+ 
+-	if (!chip->autosuspended)
++ out:
++	if (chip->num_suspended_intf == chip->system_suspend) {
+ 		snd_power_change_state(chip->card, SNDRV_CTL_POWER_D0);
+-	chip->autosuspended = 0;
++		chip->system_suspend = 0;
++	}
++	chip->num_suspended_intf--;
+ 
+ err_out:
+ 	atomic_dec(&chip->active); /* allow autopm after this point */
+diff --git a/sound/usb/quirks-table.h b/sound/usb/quirks-table.h
+index bbae11605a4c..042a5e8eb79d 100644
+--- a/sound/usb/quirks-table.h
++++ b/sound/usb/quirks-table.h
+@@ -25,6 +25,26 @@
+ 	.idProduct = prod, \
+ 	.bInterfaceClass = USB_CLASS_VENDOR_SPEC
+ 
++/* HP Thunderbolt Dock Audio Headset */
++{
++	USB_DEVICE(0x03f0, 0x0269),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.vendor_name = "HP",
++		.product_name = "Thunderbolt Dock Audio Headset",
++		.profile_name = "HP-Thunderbolt-Dock-Audio-Headset",
++		.ifnum = QUIRK_NO_INTERFACE
++	}
++},
++/* HP Thunderbolt Dock Audio Module */
++{
++	USB_DEVICE(0x03f0, 0x0567),
++	.driver_info = (unsigned long) &(const struct snd_usb_audio_quirk) {
++		.vendor_name = "HP",
++		.product_name = "Thunderbolt Dock Audio Module",
++		.profile_name = "HP-Thunderbolt-Dock-Audio-Module",
++		.ifnum = QUIRK_NO_INTERFACE
++	}
++},
+ /* FTDI devices */
+ {
+ 	USB_DEVICE(0x0403, 0xb8d8),
+diff --git a/sound/usb/usbaudio.h b/sound/usb/usbaudio.h
+index 6fe3ab582ec6..a42d021624dc 100644
+--- a/sound/usb/usbaudio.h
++++ b/sound/usb/usbaudio.h
+@@ -26,7 +26,7 @@ struct snd_usb_audio {
+ 	struct usb_interface *pm_intf;
+ 	u32 usb_id;
+ 	struct mutex mutex;
+-	unsigned int autosuspended:1;	
++	unsigned int system_suspend;
+ 	atomic_t active;
+ 	atomic_t shutdown;
+ 	atomic_t usage_count;
+diff --git a/tools/perf/util/probe-event.c b/tools/perf/util/probe-event.c
+index eea132f512b0..c6bcf5709564 100644
+--- a/tools/perf/util/probe-event.c
++++ b/tools/perf/util/probe-event.c
+@@ -1765,8 +1765,7 @@ int parse_probe_trace_command(const char *cmd, struct probe_trace_event *tev)
+ 	fmt1_str = strtok_r(argv0_str, ":", &fmt);
+ 	fmt2_str = strtok_r(NULL, "/", &fmt);
+ 	fmt3_str = strtok_r(NULL, " \t", &fmt);
+-	if (fmt1_str == NULL || strlen(fmt1_str) != 1 || fmt2_str == NULL
+-	    || fmt3_str == NULL) {
++	if (fmt1_str == NULL || fmt2_str == NULL || fmt3_str == NULL) {
+ 		semantic_error("Failed to parse event name: %s\n", argv[0]);
+ 		ret = -EINVAL;
+ 		goto out;
+diff --git a/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc b/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc
+index 021c03fd885d..23465823532b 100644
+--- a/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc
++++ b/tools/testing/selftests/ftrace/test.d/ftrace/tracing-error-log.tc
+@@ -14,6 +14,8 @@ if [ ! -f set_event ]; then
+     exit_unsupported
+ fi
+ 
++[ -f error_log ] || exit_unsupported
++
+ ftrace_errlog_check 'event filter parse error' '((sig >= 10 && sig < 15) || dsig ^== 17) && comm != bash' 'events/signal/signal_generate/filter'
+ 
+ exit 0
+diff --git a/tools/testing/selftests/networking/timestamping/rxtimestamp.c b/tools/testing/selftests/networking/timestamping/rxtimestamp.c
+index 6dee9e636a95..422e7761254d 100644
+--- a/tools/testing/selftests/networking/timestamping/rxtimestamp.c
++++ b/tools/testing/selftests/networking/timestamping/rxtimestamp.c
+@@ -115,6 +115,7 @@ static struct option long_options[] = {
+ 	{ "tcp", no_argument, 0, 't' },
+ 	{ "udp", no_argument, 0, 'u' },
+ 	{ "ip", no_argument, 0, 'i' },
++	{ NULL, 0, NULL, 0 },
+ };
+ 
+ static int next_port = 19999;
+diff --git a/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json b/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
+index 8877f7b2b809..12aa4bc1f6a0 100644
+--- a/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/filters/tests.json
+@@ -32,7 +32,7 @@
+         "setup": [
+             "$TC qdisc add dev $DEV2 ingress"
+         ],
+-        "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip pref 1 parent ffff: handle 0xffffffff flower action ok",
++        "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip pref 1 ingress handle 0xffffffff flower action ok",
+         "expExitCode": "0",
+         "verifyCmd": "$TC filter show dev $DEV2 ingress",
+         "matchPattern": "filter protocol ip pref 1 flower.*handle 0xffffffff",
+@@ -77,9 +77,9 @@
+         },
+         "setup": [
+             "$TC qdisc add dev $DEV2 ingress",
+-            "$TC filter add dev $DEV2 protocol ip prio 1 parent ffff: flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop"
++            "$TC filter add dev $DEV2 protocol ip prio 1 ingress flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop"
+         ],
+-        "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip prio 1 parent ffff: flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop",
++        "cmdUnderTest": "$TC filter add dev $DEV2 protocol ip prio 1 ingress flower dst_mac e4:11:22:11:4a:51 src_mac e4:11:22:11:4a:50 ip_proto tcp src_ip 1.1.1.1 dst_ip 2.2.2.2 action drop",
+         "expExitCode": "2",
+         "verifyCmd": "$TC -s filter show dev $DEV2 ingress",
+         "matchPattern": "filter protocol ip pref 1 flower chain 0 handle",
+diff --git a/tools/testing/selftests/tc-testing/tdc_batch.py b/tools/testing/selftests/tc-testing/tdc_batch.py
+index 6a2bd2cf528e..995f66ce43eb 100755
+--- a/tools/testing/selftests/tc-testing/tdc_batch.py
++++ b/tools/testing/selftests/tc-testing/tdc_batch.py
+@@ -72,21 +72,21 @@ mac_prefix = args.mac_prefix
+ 
+ def format_add_filter(device, prio, handle, skip, src_mac, dst_mac,
+                       share_action):
+-    return ("filter add dev {} {} protocol ip parent ffff: handle {} "
++    return ("filter add dev {} {} protocol ip ingress handle {} "
+             " flower {} src_mac {} dst_mac {} action drop {}".format(
+                 device, prio, handle, skip, src_mac, dst_mac, share_action))
+ 
+ 
+ def format_rep_filter(device, prio, handle, skip, src_mac, dst_mac,
+                       share_action):
+-    return ("filter replace dev {} {} protocol ip parent ffff: handle {} "
++    return ("filter replace dev {} {} protocol ip ingress handle {} "
+             " flower {} src_mac {} dst_mac {} action drop {}".format(
+                 device, prio, handle, skip, src_mac, dst_mac, share_action))
+ 
+ 
+ def format_del_filter(device, prio, handle, skip, src_mac, dst_mac,
+                       share_action):
+-    return ("filter del dev {} {} protocol ip parent ffff: handle {} "
++    return ("filter del dev {} {} protocol ip ingress handle {} "
+             "flower".format(device, prio, handle))
+ 
+ 
+diff --git a/virt/kvm/arm/aarch32.c b/virt/kvm/arm/aarch32.c
+index 0a356aa91aa1..f2047fc69006 100644
+--- a/virt/kvm/arm/aarch32.c
++++ b/virt/kvm/arm/aarch32.c
+@@ -33,6 +33,26 @@ static const u8 return_offsets[8][2] = {
+ 	[7] = { 4, 4 },		/* FIQ, unused */
+ };
+ 
++static bool pre_fault_synchronize(struct kvm_vcpu *vcpu)
++{
++	preempt_disable();
++	if (kvm_arm_vcpu_loaded(vcpu)) {
++		kvm_arch_vcpu_put(vcpu);
++		return true;
++	}
++
++	preempt_enable();
++	return false;
++}
++
++static void post_fault_synchronize(struct kvm_vcpu *vcpu, bool loaded)
++{
++	if (loaded) {
++		kvm_arch_vcpu_load(vcpu, smp_processor_id());
++		preempt_enable();
++	}
++}
++
+ /*
+  * When an exception is taken, most CPSR fields are left unchanged in the
+  * handler. However, some are explicitly overridden (e.g. M[4:0]).
+@@ -155,7 +175,10 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
+ 
+ void kvm_inject_undef32(struct kvm_vcpu *vcpu)
+ {
++	bool loaded = pre_fault_synchronize(vcpu);
++
+ 	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);
++	post_fault_synchronize(vcpu, loaded);
+ }
+ 
+ /*
+@@ -168,6 +191,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
+ 	u32 vect_offset;
+ 	u32 *far, *fsr;
+ 	bool is_lpae;
++	bool loaded;
++
++	loaded = pre_fault_synchronize(vcpu);
+ 
+ 	if (is_pabt) {
+ 		vect_offset = 12;
+@@ -191,6 +217,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
+ 		/* no need to shuffle FS[4] into DFSR[10] as its 0 */
+ 		*fsr = DFSR_FSC_EXTABT_nLPAE;
+ 	}
++
++	post_fault_synchronize(vcpu, loaded);
+ }
+ 
+ void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)
+diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
+index eda7b624eab8..0aca5514a58b 100644
+--- a/virt/kvm/arm/arm.c
++++ b/virt/kvm/arm/arm.c
+@@ -332,6 +332,16 @@ void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
+ 	preempt_enable();
+ }
+ 
++#ifdef CONFIG_ARM64
++#define __ptrauth_save_key(regs, key)						\
++({										\
++	regs[key ## KEYLO_EL1] = read_sysreg_s(SYS_ ## key ## KEYLO_EL1);	\
++	regs[key ## KEYHI_EL1] = read_sysreg_s(SYS_ ## key ## KEYHI_EL1);	\
++})
++#else
++#define  __ptrauth_save_key(regs, key)	do { } while (0)
++#endif
++
+ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ {
+ 	int *last_ran;
+@@ -365,7 +375,17 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+ 	else
+ 		vcpu_set_wfx_traps(vcpu);
+ 
+-	vcpu_ptrauth_setup_lazy(vcpu);
++	if (vcpu_has_ptrauth(vcpu)) {
++		struct kvm_cpu_context __maybe_unused *ctxt = vcpu->arch.host_cpu_context;
++
++		__ptrauth_save_key(ctxt->sys_regs, APIA);
++		__ptrauth_save_key(ctxt->sys_regs, APIB);
++		__ptrauth_save_key(ctxt->sys_regs, APDA);
++		__ptrauth_save_key(ctxt->sys_regs, APDB);
++		__ptrauth_save_key(ctxt->sys_regs, APGA);
++
++		vcpu_ptrauth_disable(vcpu);
++	}
+ }
+ 
+ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
+diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
+index 70f03ce0e5c1..412c85d90f18 100644
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -157,10 +157,9 @@ static void kvm_uevent_notify_change(unsigned int type, struct kvm *kvm);
+ static unsigned long long kvm_createvm_count;
+ static unsigned long long kvm_active_vms;
+ 
+-__weak int kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
+-		unsigned long start, unsigned long end, bool blockable)
++__weak void kvm_arch_mmu_notifier_invalidate_range(struct kvm *kvm,
++						   unsigned long start, unsigned long end)
+ {
+-	return 0;
+ }
+ 
+ bool kvm_is_zone_device_pfn(kvm_pfn_t pfn)
+@@ -378,6 +377,18 @@ static inline struct kvm *mmu_notifier_to_kvm(struct mmu_notifier *mn)
+ 	return container_of(mn, struct kvm, mmu_notifier);
+ }
+ 
++static void kvm_mmu_notifier_invalidate_range(struct mmu_notifier *mn,
++					      struct mm_struct *mm,
++					      unsigned long start, unsigned long end)
++{
++	struct kvm *kvm = mmu_notifier_to_kvm(mn);
++	int idx;
++
++	idx = srcu_read_lock(&kvm->srcu);
++	kvm_arch_mmu_notifier_invalidate_range(kvm, start, end);
++	srcu_read_unlock(&kvm->srcu, idx);
++}
++
+ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
+ 					struct mm_struct *mm,
+ 					unsigned long address,
+@@ -402,7 +413,6 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ {
+ 	struct kvm *kvm = mmu_notifier_to_kvm(mn);
+ 	int need_tlb_flush = 0, idx;
+-	int ret;
+ 
+ 	idx = srcu_read_lock(&kvm->srcu);
+ 	spin_lock(&kvm->mmu_lock);
+@@ -419,14 +429,9 @@ static int kvm_mmu_notifier_invalidate_range_start(struct mmu_notifier *mn,
+ 		kvm_flush_remote_tlbs(kvm);
+ 
+ 	spin_unlock(&kvm->mmu_lock);
+-
+-	ret = kvm_arch_mmu_notifier_invalidate_range(kvm, range->start,
+-					range->end,
+-					mmu_notifier_range_blockable(range));
+-
+ 	srcu_read_unlock(&kvm->srcu, idx);
+ 
+-	return ret;
++	return 0;
+ }
+ 
+ static void kvm_mmu_notifier_invalidate_range_end(struct mmu_notifier *mn,
+@@ -532,6 +537,7 @@ static void kvm_mmu_notifier_release(struct mmu_notifier *mn,
+ }
+ 
+ static const struct mmu_notifier_ops kvm_mmu_notifier_ops = {
++	.invalidate_range	= kvm_mmu_notifier_invalidate_range,
+ 	.invalidate_range_start	= kvm_mmu_notifier_invalidate_range_start,
+ 	.invalidate_range_end	= kvm_mmu_notifier_invalidate_range_end,
+ 	.clear_flush_young	= kvm_mmu_notifier_clear_flush_young,


